00:00:00.000 Started by upstream project "autotest-per-patch" build number 132533
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.124 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.125 The recommended git tool is: git
00:00:00.125 using credential 00000000-0000-0000-0000-000000000002
00:00:00.127 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.173 Fetching changes from the remote Git repository
00:00:00.175 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.232 Using shallow fetch with depth 1
00:00:00.232 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.232 > git --version # timeout=10
00:00:00.268 > git --version # 'git version 2.39.2'
00:00:00.268 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.281 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.282 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.678 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.690 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.703 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.703 > git config core.sparsecheckout # timeout=10
00:00:05.716 > git read-tree -mu HEAD # timeout=10
00:00:05.733 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.761 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.761 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.842 [Pipeline] Start of Pipeline
00:00:05.853 [Pipeline] library
00:00:05.855 Loading library shm_lib@master
00:00:05.855 Library shm_lib@master is cached. Copying from home.
00:00:05.870 [Pipeline] node
00:00:05.885 Running on WFP49 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:05.887 [Pipeline] {
00:00:05.895 [Pipeline] catchError
00:00:05.896 [Pipeline] {
00:00:05.908 [Pipeline] wrap
00:00:05.917 [Pipeline] {
00:00:05.929 [Pipeline] stage
00:00:05.932 [Pipeline] { (Prologue)
00:00:06.156 [Pipeline] sh
00:00:06.438 + logger -p user.info -t JENKINS-CI
00:00:06.452 [Pipeline] echo
00:00:06.453 Node: WFP49
00:00:06.461 [Pipeline] sh
00:00:06.754 [Pipeline] setCustomBuildProperty
00:00:06.766 [Pipeline] echo
00:00:06.767 Cleanup processes
00:00:06.771 [Pipeline] sh
00:00:07.053 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:07.053 2625713 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:07.065 [Pipeline] sh
00:00:07.345 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:07.345 ++ grep -v 'sudo pgrep'
00:00:07.345 ++ awk '{print $1}'
00:00:07.345 + sudo kill -9
00:00:07.345 + true
00:00:07.357 [Pipeline] cleanWs
00:00:07.365 [WS-CLEANUP] Deleting project workspace...
00:00:07.365 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.372 [WS-CLEANUP] done
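Note on the cleanup step traced above: it chains pgrep, grep -v, and awk to collect PIDs of SPDK processes left over from a previous run, then kills them; the trailing `+ true` keeps the step green when nothing matches. A minimal standalone sketch of the same idiom, assuming only standard coreutils (WORKSPACE stands in for the Jenkins workspace path):

    # collect PIDs of leftover SPDK processes, excluding the pgrep command itself
    WORKSPACE=/var/jenkins/workspace/short-fuzz-phy-autotest
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill them if any were found; '|| true' mirrors the '+ true' in the trace
    [ -n "$pids" ] && sudo kill -9 $pids || true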
00:00:07.376 [Pipeline] setCustomBuildProperty
00:00:07.387 [Pipeline] sh
00:00:07.701 + sudo git config --global --replace-all safe.directory '*'
00:00:07.797 [Pipeline] httpRequest
00:00:08.462 [Pipeline] echo
00:00:08.463 Sorcerer 10.211.164.101 is alive
00:00:08.473 [Pipeline] retry
00:00:08.475 [Pipeline] {
00:00:08.489 [Pipeline] httpRequest
00:00:08.493 HttpMethod: GET
00:00:08.493 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.494 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.518 Response Code: HTTP/1.1 200 OK
00:00:08.518 Success: Status code 200 is in the accepted range: 200,404
00:00:08.519 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:16.081 [Pipeline] }
00:00:16.098 [Pipeline] // retry
00:00:16.106 [Pipeline] sh
00:00:16.388 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:16.409 [Pipeline] httpRequest
00:00:16.820 [Pipeline] echo
00:00:16.822 Sorcerer 10.211.164.101 is alive
00:00:16.832 [Pipeline] retry
00:00:16.834 [Pipeline] {
00:00:16.849 [Pipeline] httpRequest
00:00:16.854 HttpMethod: GET
00:00:16.854 URL: http://10.211.164.101/packages/spdk_afdec00e1724f79bc502355ac0ab5bdff6ad1504.tar.gz
00:00:16.855 Sending request to url: http://10.211.164.101/packages/spdk_afdec00e1724f79bc502355ac0ab5bdff6ad1504.tar.gz
00:00:16.871 Response Code: HTTP/1.1 200 OK
00:00:16.871 Success: Status code 200 is in the accepted range: 200,404
00:00:16.872 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_afdec00e1724f79bc502355ac0ab5bdff6ad1504.tar.gz
00:01:02.905 [Pipeline] }
00:01:02.924 [Pipeline] // retry
00:01:02.932 [Pipeline] sh
00:01:03.272 + tar --no-same-owner -xf spdk_afdec00e1724f79bc502355ac0ab5bdff6ad1504.tar.gz
00:01:05.823 [Pipeline] sh
00:01:06.108 + git -C spdk log --oneline -n5
00:01:06.108 afdec00e1 nvmf: Add hide_metadata option to nvmf_subsystem_add_ns
00:01:06.108 b09de013a nvmf: Get metadata config by not bdev but bdev_desc
00:01:06.108 971ec0126 bdevperf: Add hide_metadata option
00:01:06.108 894d5af2a bdevperf: Get metadata config by not bdev but bdev_desc
00:01:06.108 075fb5b8c bdevperf: Store the result of DIF type check into job structure
00:01:06.119 [Pipeline] }
00:01:06.133 [Pipeline] // stage
00:01:06.142 [Pipeline] stage
00:01:06.144 [Pipeline] { (Prepare)
00:01:06.165 [Pipeline] writeFile
00:01:06.184 [Pipeline] sh
00:01:06.469 + logger -p user.info -t JENKINS-CI
00:01:06.482 [Pipeline] sh
00:01:06.765 + logger -p user.info -t JENKINS-CI
00:01:06.775 [Pipeline] sh
00:01:07.057 + cat autorun-spdk.conf
00:01:07.057 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:07.058 SPDK_TEST_FUZZER_SHORT=1
00:01:07.058 SPDK_TEST_FUZZER=1
00:01:07.058 SPDK_TEST_SETUP=1
00:01:07.058 SPDK_RUN_UBSAN=1
00:01:07.064 RUN_NIGHTLY=0
00:01:07.070 [Pipeline] readFile
00:01:07.095 [Pipeline] withEnv
00:01:07.097 [Pipeline] {
00:01:07.111 [Pipeline] sh
00:01:07.397 + set -ex
00:01:07.397 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:01:07.397 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:01:07.397 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:07.397 ++ SPDK_TEST_FUZZER_SHORT=1
00:01:07.397 ++ SPDK_TEST_FUZZER=1
00:01:07.397 ++ SPDK_TEST_SETUP=1
00:01:07.397 ++ SPDK_RUN_UBSAN=1
00:01:07.397 ++ RUN_NIGHTLY=0
00:01:07.397 + case $SPDK_TEST_NVMF_NICS in
00:01:07.397 + DRIVERS=
00:01:07.397 + [[ -n '' ]]
00:01:07.397 + exit 0
00:01:07.407 [Pipeline] }
00:01:07.422 [Pipeline] // withEnv
00:01:07.427 [Pipeline] }
00:01:07.437 [Pipeline] // stage
00:01:07.445 [Pipeline] catchError
00:01:07.446 [Pipeline] {
00:01:07.456 [Pipeline] timeout
00:01:07.456 Timeout set to expire in 30 min
00:01:07.457 [Pipeline] {
00:01:07.467 [Pipeline] stage
00:01:07.469 [Pipeline] { (Tests)
00:01:07.479 [Pipeline] sh
00:01:07.760 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:07.760 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:07.760 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:01:07.760 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:01:07.760 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:01:07.760 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:01:07.760 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:01:07.760 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:01:07.760 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:01:07.760 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:01:07.760 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:01:07.760 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:07.760 + source /etc/os-release
00:01:07.760 ++ NAME='Fedora Linux'
00:01:07.760 ++ VERSION='39 (Cloud Edition)'
00:01:07.760 ++ ID=fedora
00:01:07.760 ++ VERSION_ID=39
00:01:07.760 ++ VERSION_CODENAME=
00:01:07.760 ++ PLATFORM_ID=platform:f39
00:01:07.760 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:07.760 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:07.760 ++ LOGO=fedora-logo-icon
00:01:07.760 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:07.760 ++ HOME_URL=https://fedoraproject.org/
00:01:07.760 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:07.760 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:07.760 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:07.760 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:07.760 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:07.760 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:07.760 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:07.760 ++ SUPPORT_END=2024-11-12
00:01:07.760 ++ VARIANT='Cloud Edition'
00:01:07.760 ++ VARIANT_ID=cloud
00:01:07.760 + uname -a
00:01:07.760 Linux spdk-wfp-49 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:07.760 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:01:10.293 Hugepages
00:01:10.293 node hugesize free / total
00:01:10.293 node0 1048576kB 0 / 0
00:01:10.293 node0 2048kB 0 / 0
00:01:10.293 node1 1048576kB 0 / 0
00:01:10.293 node1 2048kB 0 / 0
00:01:10.293
00:01:10.293 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:10.552 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:10.552 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:10.552 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:10.552 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:10.552 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:10.552 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:10.552 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:10.552 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:10.552 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:10.552 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:10.552 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:10.552 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:10.552 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:10.552 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:10.552 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:10.552 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:10.552 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
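Note on the table above: setup.sh status prints the per-node hugepage counts and the PCI device inventory. The hugepage numbers come from standard sysfs counters, so they can also be read directly, independent of SPDK; a minimal sketch using only kernel-provided paths:

    # print "node0 2048kB <free> / <total>" style lines, matching the table above
    for d in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        node=${d#/sys/devices/system/node/}; node=${node%%/*}
        size=${d##*hugepages-}
        echo "$node $size $(cat "$d/free_hugepages") / $(cat "$d/nr_hugepages")"
    done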
00:01:10.552 + rm -f /tmp/spdk-ld-path
00:01:10.552 + source autorun-spdk.conf
00:01:10.552 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:10.552 ++ SPDK_TEST_FUZZER_SHORT=1
00:01:10.552 ++ SPDK_TEST_FUZZER=1
00:01:10.552 ++ SPDK_TEST_SETUP=1
00:01:10.552 ++ SPDK_RUN_UBSAN=1
00:01:10.552 ++ RUN_NIGHTLY=0
00:01:10.552 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:10.552 + [[ -n '' ]]
00:01:10.552 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:01:10.553 + for M in /var/spdk/build-*-manifest.txt
00:01:10.553 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:10.553 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:10.553 + for M in /var/spdk/build-*-manifest.txt
00:01:10.553 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:10.553 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:10.553 + for M in /var/spdk/build-*-manifest.txt
00:01:10.553 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:10.553 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:10.553 ++ uname
00:01:10.553 + [[ Linux == \L\i\n\u\x ]]
00:01:10.553 + sudo dmesg -T
00:01:10.812 + sudo dmesg --clear
00:01:10.812 + dmesg_pid=2627073
00:01:10.812 + [[ Fedora Linux == FreeBSD ]]
00:01:10.812 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:10.812 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:10.812 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:10.812 + [[ -x /usr/src/fio-static/fio ]]
00:01:10.812 + export FIO_BIN=/usr/src/fio-static/fio
00:01:10.812 + FIO_BIN=/usr/src/fio-static/fio
00:01:10.812 + sudo dmesg -Tw
00:01:10.812 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:10.812 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:10.812 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:10.812 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:10.812 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:10.812 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:10.812 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:10.812 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:10.812 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
18:51:27 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
18:51:27 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
18:51:27 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
18:51:27 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1
18:51:27 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1
18:51:27 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1
18:51:27 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1
18:51:27 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
18:51:27 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
18:51:27 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
18:51:27 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
18:51:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
18:51:27 -- scripts/common.sh@15 -- $ shopt -s extglob
18:51:27 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
18:51:27 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
18:51:27 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
18:51:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:51:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:51:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:51:27 -- paths/export.sh@5 -- $ export PATH
18:51:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:51:27 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
18:51:27 -- common/autobuild_common.sh@493 -- $ date +%s
18:51:27 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732643487.XXXXXX
18:51:27 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732643487.4VXBzZ
18:51:27 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
18:51:27 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
18:51:27 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
18:51:27 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
18:51:27 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
18:51:27 -- common/autobuild_common.sh@509 -- $ get_config_params
18:51:27 -- common/autotest_common.sh@409 -- $ xtrace_disable
18:51:27 -- common/autotest_common.sh@10 -- $ set +x
18:51:27 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
18:51:27 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
18:51:27 -- pm/common@17 -- $ local monitor
18:51:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:51:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:51:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:51:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:51:27 -- pm/common@25 -- $ sleep 1
18:51:27 -- pm/common@21 -- $ date +%s
18:51:27 -- pm/common@21 -- $ date +%s
18:51:27 -- pm/common@21 -- $ date +%s
18:51:27 -- pm/common@21 -- $ date +%s
18:51:27 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732643487
18:51:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732643487
18:51:27 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732643487
18:51:27 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732643487
00:01:11.072 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732643487_collect-cpu-temp.pm.log
00:01:11.072 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732643487_collect-cpu-load.pm.log
00:01:11.072 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732643487_collect-vmstat.pm.log
00:01:11.072 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732643487_collect-bmc-pm.bmc.pm.log
00:01:12.011 18:51:28 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
18:51:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
18:51:28 -- spdk/autobuild.sh@12 -- $ umask 022
18:51:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
18:51:28 -- spdk/autobuild.sh@16 -- $ date -u
00:01:12.011 Tue Nov 26 05:51:28 PM UTC 2024
18:51:28 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:12.011 v25.01-pre-262-gafdec00e1
18:51:29 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
18:51:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
18:51:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
18:51:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
18:51:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable
18:51:29 -- common/autotest_common.sh@10 -- $ set +x
00:01:12.011 ************************************
00:01:12.011 START TEST ubsan
00:01:12.011 ************************************
18:51:29 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
using ubsan
00:01:12.011
00:01:12.011 real 0m0.000s
00:01:12.011 user 0m0.000s
00:01:12.011 sys 0m0.000s
18:51:29 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
18:51:29 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:12.011 ************************************
00:01:12.011 END TEST ubsan
00:01:12.011 ************************************
18:51:29 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
18:51:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
18:51:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
18:51:29 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]]
18:51:29 -- spdk/autobuild.sh@52 -- $ llvm_precompile
18:51:29 -- common/autobuild_common.sh@445 -- $ run_test autobuild_llvm_precompile _llvm_precompile
18:51:29 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:01:12.011 18:51:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable
18:51:29 -- common/autotest_common.sh@10 -- $ set +x
00:01:12.011 ************************************
00:01:12.011 START TEST autobuild_llvm_precompile
00:01:12.011 ************************************
18:51:29 autobuild_llvm_precompile -- common/autotest_common.sh@1129 -- $ _llvm_precompile
18:51:29 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version
18:51:29 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39)
00:01:12.011 Target: x86_64-redhat-linux-gnu
00:01:12.011 Thread model: posix
00:01:12.011 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]]
18:51:29 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17
18:51:29 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17
18:51:29 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17
18:51:29 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17
18:51:29 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17
18:51:29 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
18:51:29 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
18:51:29 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]]
18:51:29 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a'
18:51:29 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:01:12.270 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:12.270 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:12.839 Using 'verbs' RDMA provider
00:01:28.663 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:40.873 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:40.873 Creating mk/config.mk...done.
00:01:40.873 Creating mk/cc.flags.mk...done.
00:01:40.873 Type 'make' to build.
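Note on the _llvm_precompile trace above: it derives clang_num from `clang --version`, expands a glob over /usr/lib*/clang to locate libclang_rt.fuzzer_no_main.a, and appends that path to configure as --with-fuzzer. A rough standalone equivalent of the lookup, assuming a Fedora-style clang layout and a simplified glob rather than SPDK's exact @(...) extglob:

    # extract the clang major version, e.g. "17" on this host
    clang_num=$(clang --version | sed -n 's/^clang version \([0-9]*\).*/\1/p')
    shopt -s nullglob
    # find the fuzzer runtime archive shipped with that clang
    fuzzer_libs=(/usr/lib*/clang/"$clang_num"/lib/*linux*/libclang_rt.fuzzer_no_main*.a)
    # this value is what the trace passes to ./configure as --with-fuzzer=...
    [[ -e ${fuzzer_libs[0]} ]] && echo "--with-fuzzer=${fuzzer_libs[0]}"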
00:01:40.873
00:01:40.873 real 0m28.262s
00:01:40.873 user 0m12.730s
00:01:40.873 sys 0m14.698s
18:51:57 autobuild_llvm_precompile -- common/autotest_common.sh@1130 -- $ xtrace_disable
18:51:57 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x
00:01:40.873 ************************************
00:01:40.873 END TEST autobuild_llvm_precompile
00:01:40.873 ************************************
18:51:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
18:51:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
18:51:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
18:51:57 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]]
18:51:57 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:01:40.873 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:40.873 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:40.873 Using 'verbs' RDMA provider
00:01:54.034 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:04.018 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:04.278 Creating mk/config.mk...done.
00:02:04.278 Creating mk/cc.flags.mk...done.
00:02:04.278 Type 'make' to build.
18:52:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j72
18:52:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
18:52:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable
18:52:21 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.278 ************************************
00:02:04.278 START TEST make
00:02:04.278 ************************************
18:52:21 make -- common/autotest_common.sh@1129 -- $ make -j72
00:02:04.538 make[1]: Nothing to be done for 'all'.
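Note on the START/END TEST banners and real/user/sys triplets above: they are emitted by SPDK's run_test wrapper, which times each named step. A hypothetical stand-in that produces output of the same shape (not SPDK's actual implementation) looks like:

    run_test_sketch() {                 # hypothetical stand-in for run_test
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                       # bash's time keyword prints the real/user/sys lines
        echo "END TEST $name"
    }
    run_test_sketch make make -j72      # as invoked by autobuild.sh@70 above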
00:02:06.452 The Meson build system
00:02:06.452 Version: 1.5.0
00:02:06.452 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:02:06.452 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:06.452 Build type: native build
00:02:06.452 Project name: libvfio-user
00:02:06.452 Project version: 0.0.1
00:02:06.452 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:02:06.452 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:02:06.452 Host machine cpu family: x86_64
00:02:06.452 Host machine cpu: x86_64
00:02:06.452 Run-time dependency threads found: YES
00:02:06.452 Library dl found: YES
00:02:06.452 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:06.452 Run-time dependency json-c found: YES 0.17
00:02:06.452 Run-time dependency cmocka found: YES 1.1.7
00:02:06.452 Program pytest-3 found: NO
00:02:06.452 Program flake8 found: NO
00:02:06.452 Program misspell-fixer found: NO
00:02:06.452 Program restructuredtext-lint found: NO
00:02:06.452 Program valgrind found: YES (/usr/bin/valgrind)
00:02:06.452 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:06.452 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:06.452 Compiler for C supports arguments -Wwrite-strings: YES
00:02:06.452 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:06.452 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:06.452 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:06.452 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:06.452 Build targets in project: 8
00:02:06.452 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:06.452 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:06.452
00:02:06.452 libvfio-user 0.0.1
00:02:06.452
00:02:06.452 User defined options
00:02:06.452 buildtype : debug
00:02:06.452 default_library: static
00:02:06.452 libdir : /usr/local/lib
00:02:06.452
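Note on the libvfio-user summary just above: buildtype debug, a static default_library, and libdir /usr/local/lib map onto a plain manual Meson invocation along these lines. The paths come from the Source dir/Build dir lines and the -D options are standard Meson builtins, so treat this as an illustrative sketch rather than the harness's literal command:

    meson setup /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug \
          /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user \
          -Dbuildtype=debug -Ddefault_library=static -Dlibdir=/usr/local/lib
    # then compile with the ninja backend, as the log does below
    ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug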
00:02:06.452 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:06.713 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:06.713 [1/36] Compiling C object samples/lspci.p/lspci.c.o
00:02:06.713 [2/36] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:06.713 [3/36] Compiling C object samples/null.p/null.c.o
00:02:06.713 [4/36] Compiling C object lib/libvfio-user.a.p/irq.c.o
00:02:06.713 [5/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:06.713 [6/36] Compiling C object lib/libvfio-user.a.p/tran.c.o
00:02:06.713 [7/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:06.713 [8/36] Compiling C object lib/libvfio-user.a.p/pci.c.o
00:02:06.713 [9/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:06.713 [10/36] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:06.713 [11/36] Compiling C object lib/libvfio-user.a.p/migration.c.o
00:02:06.713 [12/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o
00:02:06.713 [13/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o
00:02:06.713 [14/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:06.713 [15/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:06.713 [16/36] Compiling C object test/unit_tests.p/mocks.c.o
00:02:06.713 [17/36] Compiling C object samples/server.p/server.c.o
00:02:06.713 [18/36] Compiling C object lib/libvfio-user.a.p/dma.c.o
00:02:06.713 [19/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:06.713 [20/36] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:06.713 [21/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:06.713 [22/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:06.713 [23/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:06.713 [24/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:06.973 [25/36] Compiling C object samples/client.p/client.c.o
00:02:06.973 [26/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:06.973 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o
00:02:06.973 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:06.973 [29/36] Linking static target lib/libvfio-user.a
00:02:06.973 [30/36] Linking target samples/client
00:02:06.973 [31/36] Linking target samples/shadow_ioeventfd_server
00:02:06.973 [32/36] Linking target samples/null
00:02:06.973 [33/36] Linking target samples/gpio-pci-idio-16
00:02:06.973 [34/36] Linking target test/unit_tests
00:02:06.973 [35/36] Linking target samples/server
00:02:06.973 [36/36] Linking target samples/lspci
00:02:06.973 INFO: autodetecting backend as ninja
00:02:06.973 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:06.973 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:07.539 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:07.539 ninja: no work to do.
00:02:12.821 The Meson build system
00:02:12.821 Version: 1.5.0
00:02:12.821 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
00:02:12.821 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp
00:02:12.821 Build type: native build
00:02:12.821 Program cat found: YES (/usr/bin/cat)
00:02:12.821 Project name: DPDK
00:02:12.821 Project version: 24.03.0
00:02:12.821 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:02:12.821 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:02:12.821 Host machine cpu family: x86_64
00:02:12.821 Host machine cpu: x86_64
00:02:12.821 Message: ## Building in Developer Mode ##
00:02:12.821 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:12.821 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:12.821 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:12.821 Program python3 found: YES (/usr/bin/python3)
00:02:12.821 Program cat found: YES (/usr/bin/cat)
00:02:12.821 Compiler for C supports arguments -march=native: YES
00:02:12.821 Checking for size of "void *" : 8
00:02:12.821 Checking for size of "void *" : 8 (cached)
00:02:12.821 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:12.821 Library m found: YES
00:02:12.821 Library numa found: YES
00:02:12.821 Has header "numaif.h" : YES
00:02:12.821 Library fdt found: NO
00:02:12.821 Library execinfo found: NO
00:02:12.821 Has header "execinfo.h" : YES
00:02:12.821 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:12.821 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:12.821 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:12.821 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:12.821 Run-time dependency openssl found: YES 3.1.1
00:02:12.821 Run-time dependency libpcap found: YES 1.10.4
00:02:12.821 Has header "pcap.h" with dependency libpcap: YES
00:02:12.821 Compiler for C supports arguments -Wcast-qual: YES
00:02:12.821 Compiler for C supports arguments -Wdeprecated: YES
00:02:12.821 Compiler for C supports arguments -Wformat: YES
00:02:12.821 Compiler for C supports arguments -Wformat-nonliteral: YES
00:02:12.821 Compiler for C supports arguments -Wformat-security: YES
00:02:12.821 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:12.821 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:12.821 Compiler for C supports arguments -Wnested-externs: YES
00:02:12.821 Compiler for C supports arguments -Wold-style-definition: YES
00:02:12.821 Compiler for C supports arguments -Wpointer-arith: YES
00:02:12.821 Compiler for C supports arguments -Wsign-compare: YES
00:02:12.821 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:12.821 Compiler for C supports arguments -Wundef: YES
00:02:12.821 Compiler for C supports arguments -Wwrite-strings: YES
00:02:12.821 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:12.821 Compiler for C supports arguments -Wno-packed-not-aligned: NO
00:02:12.821 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:12.821 Program objdump found: YES (/usr/bin/objdump)
00:02:12.821 Compiler for C supports arguments -mavx512f: YES
00:02:12.821 Checking if "AVX512 checking" compiles: YES
00:02:12.821 Fetching value of define "__SSE4_2__" : 1
00:02:12.821 Fetching value of define "__AES__" : 1
00:02:12.821 Fetching value of define "__AVX__" : 1
00:02:12.821 Fetching value of define "__AVX2__" : 1
00:02:12.821 Fetching value of define "__AVX512BW__" : 1
00:02:12.821 Fetching value of define "__AVX512CD__" : 1
00:02:12.821 Fetching value of define "__AVX512DQ__" : 1
00:02:12.821 Fetching value of define "__AVX512F__" : 1
00:02:12.821 Fetching value of define "__AVX512VL__" : 1
00:02:12.821 Fetching value of define "__PCLMUL__" : 1
00:02:12.821 Fetching value of define "__RDRND__" : 1
00:02:12.821 Fetching value of define "__RDSEED__" : 1
00:02:12.821 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:12.821 Fetching value of define "__znver1__" : (undefined)
00:02:12.821 Fetching value of define "__znver2__" : (undefined)
00:02:12.821 Fetching value of define "__znver3__" : (undefined)
00:02:12.821 Fetching value of define "__znver4__" : (undefined)
00:02:12.821 Compiler for C supports arguments -Wno-format-truncation: NO
00:02:12.821 Message: lib/log: Defining dependency "log"
00:02:12.821 Message: lib/kvargs: Defining dependency "kvargs"
00:02:12.821 Message: lib/telemetry: Defining dependency "telemetry"
00:02:12.821 Checking for function "getentropy" : NO
00:02:12.821 Message: lib/eal: Defining dependency "eal"
00:02:12.821 Message: lib/ring: Defining dependency "ring"
00:02:12.821 Message: lib/rcu: Defining dependency "rcu"
00:02:12.821 Message: lib/mempool: Defining dependency "mempool"
00:02:12.821 Message: lib/mbuf: Defining dependency "mbuf"
00:02:12.821 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:12.821 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:12.821 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:12.821 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:12.821 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:12.821 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:12.821 Compiler for C supports arguments -mpclmul: YES
00:02:12.821 Compiler for C supports arguments -maes: YES
00:02:12.821 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:12.821 Compiler for C supports arguments -mavx512bw: YES
00:02:12.821 Compiler for C supports arguments -mavx512dq: YES
00:02:12.821 Compiler for C supports arguments -mavx512vl: YES
00:02:12.821 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:12.821 Compiler for C supports arguments -mavx2: YES
00:02:12.821 Compiler for C supports arguments -mavx: YES
00:02:12.821 Message: lib/net: Defining dependency "net"
00:02:12.821 Message: lib/meter: Defining dependency "meter"
00:02:12.821 Message: lib/ethdev: Defining dependency "ethdev"
00:02:12.821 Message: lib/pci: Defining dependency "pci"
00:02:12.821 Message: lib/cmdline: Defining dependency "cmdline"
00:02:12.821 Message: lib/hash: Defining dependency "hash"
00:02:12.821 Message: lib/timer: Defining dependency "timer"
00:02:12.821 Message: lib/compressdev: Defining dependency "compressdev"
00:02:12.821 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:12.821 Message: lib/dmadev: Defining dependency "dmadev"
00:02:12.821 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:12.821 Message: lib/power: Defining dependency "power"
00:02:12.821 Message: lib/reorder: Defining dependency "reorder"
00:02:12.822 Message: lib/security: Defining dependency "security"
00:02:12.822 Has header "linux/userfaultfd.h" : YES
00:02:12.822 Has header "linux/vduse.h" : YES
00:02:12.822 Message: lib/vhost: Defining dependency "vhost"
00:02:12.822 Compiler for C supports arguments -Wno-format-truncation: NO (cached)
00:02:12.822 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:12.822 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:12.822 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:12.822 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:12.822 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:12.822 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:12.822 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:12.822 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:12.822 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:12.822 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:12.822 Configuring doxy-api-html.conf using configuration
00:02:12.822 Configuring doxy-api-man.conf using configuration
00:02:12.822 Program mandb found: YES (/usr/bin/mandb)
00:02:12.822 Program sphinx-build found: NO
00:02:12.822 Configuring rte_build_config.h using configuration
00:02:12.822 Message:
00:02:12.822 =================
00:02:12.822 Applications Enabled
00:02:12.822 =================
00:02:12.822
00:02:12.822 apps:
00:02:12.822
00:02:12.822
00:02:12.822 Message:
00:02:12.822 =================
00:02:12.822 Libraries Enabled
00:02:12.822 =================
00:02:12.822
00:02:12.822 libs:
00:02:12.822 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:12.822 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:12.822 cryptodev, dmadev, power, reorder, security, vhost,
00:02:12.822
00:02:12.822 Message:
00:02:12.822 ===============
00:02:12.822 Drivers Enabled
00:02:12.822 ===============
00:02:12.822
00:02:12.822 common:
00:02:12.822
00:02:12.822 bus:
00:02:12.822 pci, vdev,
00:02:12.822 mempool:
00:02:12.822 ring,
00:02:12.822 dma:
00:02:12.822
00:02:12.822 net:
00:02:12.822
00:02:12.822 crypto:
00:02:12.822
00:02:12.822 compress:
00:02:12.822
00:02:12.822 vdpa:
00:02:12.822
00:02:12.822
00:02:12.822 Message:
00:02:12.822 =================
00:02:12.822 Content Skipped
00:02:12.822 =================
00:02:12.822
00:02:12.822 apps:
00:02:12.822 dumpcap: explicitly disabled via build config
00:02:12.822 graph: explicitly disabled via build config
00:02:12.822 pdump: explicitly disabled via build config
00:02:12.822 proc-info: explicitly disabled via build config
00:02:12.822 test-acl: explicitly disabled via build config
00:02:12.822 test-bbdev: explicitly disabled via build config
00:02:12.822 test-cmdline: explicitly disabled via build config
00:02:12.822 test-compress-perf: explicitly disabled via build config
00:02:12.822 test-crypto-perf: explicitly disabled via build config
00:02:12.822 test-dma-perf: explicitly disabled via build config
00:02:12.822 test-eventdev: explicitly disabled via build config
00:02:12.822 test-fib: explicitly disabled via build config
00:02:12.822 test-flow-perf: explicitly disabled via build config
00:02:12.822 test-gpudev: explicitly disabled via build config
00:02:12.822 test-mldev: explicitly disabled via build config
00:02:12.822 test-pipeline: explicitly disabled via build config
00:02:12.822 test-pmd: explicitly disabled via build config
00:02:12.822 test-regex: explicitly disabled via build config
00:02:12.822 test-sad: explicitly disabled via build config
00:02:12.822 test-security-perf: explicitly disabled via build config
00:02:12.822
00:02:12.822 libs:
00:02:12.822 argparse: explicitly disabled via build config
00:02:12.822 metrics: explicitly disabled via build config
00:02:12.822 acl: explicitly disabled via build config
00:02:12.822 bbdev: explicitly disabled via build config
00:02:12.822 bitratestats: explicitly disabled via build config
00:02:12.822 bpf: explicitly disabled via build config
00:02:12.822 cfgfile: explicitly disabled via build config
00:02:12.822 distributor: explicitly disabled via build config
00:02:12.822 efd: explicitly disabled via build config
00:02:12.822 eventdev: explicitly disabled via build config
00:02:12.822 dispatcher: explicitly disabled via build config
00:02:12.822 gpudev: explicitly disabled via build config
00:02:12.822 gro: explicitly disabled via build config
00:02:12.822 gso: explicitly disabled via build config
00:02:12.822 ip_frag: explicitly disabled via build config
00:02:12.822 jobstats: explicitly disabled via build config
00:02:12.822 latencystats: explicitly disabled via build config
00:02:12.822 lpm: explicitly disabled via build config
00:02:12.822 member: explicitly disabled via build config
00:02:12.822 pcapng: explicitly disabled via build config
00:02:12.822 rawdev: explicitly disabled via build config
00:02:12.822 regexdev: explicitly disabled via build config
00:02:12.822 mldev: explicitly disabled via build config
00:02:12.822 rib: explicitly disabled via build config
00:02:12.822 sched: explicitly disabled via build config
00:02:12.822 stack: explicitly disabled via build config
00:02:12.822 ipsec: explicitly disabled via build config
00:02:12.822 pdcp: explicitly disabled via build config
00:02:12.822 fib: explicitly disabled via build config
00:02:12.822 port: explicitly disabled via build config
00:02:12.822 pdump: explicitly disabled via build config
00:02:12.822 table: explicitly disabled via build config
00:02:12.822 pipeline: explicitly disabled via build config
00:02:12.822 graph: explicitly disabled via build config
00:02:12.822 node: explicitly disabled via build config
00:02:12.822
00:02:12.823 drivers:
00:02:12.823 common/cpt: not in enabled drivers build config
00:02:12.823 common/dpaax: not in enabled drivers build config
00:02:12.823 common/iavf: not in enabled drivers build config
00:02:12.823 common/idpf: not in enabled drivers build config
00:02:12.823 common/ionic: not in enabled drivers build config
00:02:12.823 common/mvep: not in enabled drivers build config
00:02:12.823 common/octeontx: not in enabled drivers build config
00:02:12.823 bus/auxiliary: not in enabled drivers build config
00:02:12.823 bus/cdx: not in enabled drivers build config
00:02:12.823 bus/dpaa: not in enabled drivers build config
00:02:12.823 bus/fslmc: not in enabled drivers build config
00:02:12.823 bus/ifpga: not in enabled drivers build config
00:02:12.823 bus/platform: not in enabled drivers build config
00:02:12.823 bus/uacce: not in enabled drivers build config
00:02:12.823 bus/vmbus: not in enabled drivers build config
00:02:12.823 common/cnxk: not in enabled drivers build config
00:02:12.823 common/mlx5: not in enabled drivers build config
00:02:12.823 common/nfp: not in enabled drivers build config
00:02:12.823 common/nitrox: not in enabled drivers build config
00:02:12.823 common/qat: not in enabled drivers build config
00:02:12.823 common/sfc_efx: not in enabled drivers build config
00:02:12.823 mempool/bucket: not in enabled drivers build config
00:02:12.823 mempool/cnxk: not in enabled drivers build config
00:02:12.823 mempool/dpaa: not in enabled drivers build config
00:02:12.823 mempool/dpaa2: not in enabled drivers build config
00:02:12.823 mempool/octeontx: not in enabled drivers build config
00:02:12.823 mempool/stack: not in enabled drivers build config
00:02:12.823 dma/cnxk: not in enabled drivers build config
00:02:12.823 dma/dpaa: not in enabled drivers build config
00:02:12.823 dma/dpaa2: not in enabled drivers build config
00:02:12.823 dma/hisilicon: not in enabled drivers build config
00:02:12.823 dma/idxd: not in enabled drivers build config
00:02:12.823 dma/ioat: not in enabled drivers build config
00:02:12.823 dma/skeleton: not in enabled drivers build config
00:02:12.823 net/af_packet: not in enabled drivers build config
00:02:12.823 net/af_xdp: not in enabled drivers build config
00:02:12.823 net/ark: not in enabled drivers build config
00:02:12.823 net/atlantic: not in enabled drivers build config
00:02:12.823 net/avp: not in enabled drivers build config
00:02:12.823 net/axgbe: not in enabled drivers build config
00:02:12.823 net/bnx2x: not in enabled drivers build config
00:02:12.823 net/bnxt: not in enabled drivers build config
00:02:12.823 net/bonding: not in enabled drivers build config
00:02:12.823 net/cnxk: not in enabled drivers build config
00:02:12.823 net/cpfl: not in enabled drivers build config
00:02:12.823 net/cxgbe: not in enabled drivers build config
00:02:12.823 net/dpaa: not in enabled drivers build config
00:02:12.823 net/dpaa2: not in enabled drivers build config
00:02:12.823 net/e1000: not in enabled drivers build config
00:02:12.823 net/ena: not in enabled drivers build config
00:02:12.823 net/enetc: not in enabled drivers build config
00:02:12.823 net/enetfec: not in enabled drivers build config
00:02:12.823 net/enic: not in enabled drivers build config
00:02:12.823 net/failsafe: not in enabled drivers build config
00:02:12.823 net/fm10k: not in enabled drivers build config
00:02:12.823 net/gve: not in enabled drivers build config
00:02:12.823 net/hinic: not in enabled drivers build config
00:02:12.823 net/hns3: not in enabled drivers build config
00:02:12.823 net/i40e: not in enabled drivers build config
00:02:12.823 net/iavf: not in enabled drivers build config
00:02:12.823 net/ice: not in enabled drivers build config
00:02:12.823 net/idpf: not in enabled drivers build config
00:02:12.823 net/igc: not in enabled drivers build config
00:02:12.823 net/ionic: not in enabled drivers build config
00:02:12.823 net/ipn3ke: not in enabled drivers build config
00:02:12.823 net/ixgbe: not in enabled drivers build config
00:02:12.823 net/mana: not in enabled drivers build config
00:02:12.823 net/memif: not in enabled drivers build config
00:02:12.823 net/mlx4: not in enabled drivers build config
00:02:12.823 net/mlx5: not in enabled drivers build config
00:02:12.823 net/mvneta: not in enabled drivers build config
00:02:12.823 net/mvpp2: not in enabled drivers build config
00:02:12.823 net/netvsc: not in enabled drivers build config
00:02:12.823 net/nfb: not in enabled drivers build config
00:02:12.823 net/nfp: not in enabled drivers build config
00:02:12.823 net/ngbe: not in enabled drivers build config
00:02:12.823 net/null: not in enabled drivers build config
00:02:12.823 net/octeontx: not in enabled drivers build config
00:02:12.823 net/octeon_ep: not in enabled drivers build config
00:02:12.823 net/pcap: not in enabled drivers build config
00:02:12.823 net/pfe: not in enabled drivers build config
00:02:12.823 net/qede: not in enabled drivers build config
00:02:12.823 net/ring: not in enabled drivers build config
00:02:12.823 net/sfc: not in enabled drivers build config
00:02:12.823 net/softnic: not in enabled drivers build config
00:02:12.823 net/tap: not in enabled drivers build config
00:02:12.823 net/thunderx: not in enabled drivers build config
00:02:12.823 net/txgbe: not in enabled drivers build config
00:02:12.823 net/vdev_netvsc: not in enabled drivers build config
00:02:12.823 net/vhost: not in enabled drivers build config
00:02:12.823 net/virtio: not in enabled drivers build config
00:02:12.823 net/vmxnet3: not in enabled drivers build config
00:02:12.823 raw/*: missing internal dependency, "rawdev"
00:02:12.823 crypto/armv8: not in enabled drivers build config
00:02:12.823 crypto/bcmfs: not in enabled drivers build config
00:02:12.823 crypto/caam_jr: not in enabled drivers build config
00:02:12.823 crypto/ccp: not in enabled drivers build config
00:02:12.823 crypto/cnxk: not in enabled drivers build config
00:02:12.823 crypto/dpaa_sec: not in enabled drivers build config
00:02:12.823 crypto/dpaa2_sec: not in enabled drivers build config
00:02:12.823 crypto/ipsec_mb: not in enabled drivers build config
00:02:12.823 crypto/mlx5: not in enabled drivers build config
00:02:12.823 crypto/mvsam: not in enabled drivers build config
00:02:12.823 crypto/nitrox: not in enabled drivers build config
00:02:12.823 crypto/null: not in enabled drivers build config
00:02:12.823 crypto/octeontx: not in enabled drivers build config
00:02:12.823 crypto/openssl: not in enabled drivers build config
00:02:12.823 crypto/scheduler: not in enabled drivers build config
00:02:12.823 crypto/uadk: not in enabled drivers build config
00:02:12.823 crypto/virtio: not in enabled drivers build config
00:02:12.823 compress/isal: not in enabled drivers build config
00:02:12.823 compress/mlx5: not in enabled drivers build config
00:02:12.823 compress/nitrox: not in enabled drivers build config
00:02:12.823 compress/octeontx: not in enabled drivers build config
00:02:12.823 compress/zlib: not in enabled drivers build config
00:02:12.823 regex/*: missing internal dependency, "regexdev"
00:02:12.823 ml/*: missing internal dependency, "mldev"
00:02:12.823 vdpa/ifc: not in enabled drivers build config
00:02:12.823 vdpa/mlx5: not in enabled drivers build config
00:02:12.823 vdpa/nfp: not in enabled drivers build config
00:02:12.823 vdpa/sfc: not in enabled drivers build config
00:02:12.823 event/*: missing internal dependency, "eventdev"
00:02:12.823 baseband/*: missing internal dependency, "bbdev"
00:02:12.823 gpu/*: missing internal dependency, "gpudev"
00:02:12.823
00:02:12.823
00:02:13.083 Build targets in project: 85
00:02:13.083
00:02:13.083 DPDK 24.03.0
00:02:13.083
00:02:13.083 User defined options
00:02:13.083 buildtype : debug
00:02:13.083 default_library : static
00:02:13.083 libdir : lib
00:02:13.083 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:02:13.083 c_args : -fPIC -Werror
00:02:13.083 c_link_args :
00:02:13.083 cpu_instruction_set: native
00:02:13.083 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:02:13.083 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:02:13.083 enable_docs : false
00:02:13.083 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:13.083 enable_kmods : false
00:02:13.083 max_lcores : 128
00:02:13.083 tests : false
00:02:13.083
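Note on the DPDK summary just above: it likewise maps onto a plain Meson invocation. The option names are taken verbatim from the User defined options block, but the long disable_apps/disable_libs lists are abridged here, so treat this as an illustrative sketch rather than the harness's literal command:

    cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
    meson setup build-tmp -Dbuildtype=debug -Ddefault_library=static \
          -Dc_args='-fPIC -Werror' -Dmax_lcores=128 -Dtests=false \
          -Denable_docs=false -Denable_kmods=false   # disable_apps/disable_libs lists omitted
    ninja -C build-tmp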
bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:13.083 enable_docs : false 00:02:13.083 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:13.083 enable_kmods : false 00:02:13.083 max_lcores : 128 00:02:13.083 tests : false 00:02:13.083 00:02:13.083 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:13.351 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:13.616 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:13.616 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:13.616 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:13.616 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:13.617 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:13.617 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:13.617 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:13.617 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:13.617 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:13.617 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:13.617 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:13.617 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:13.617 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:13.617 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:13.617 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:13.617 [16/268] Linking static target lib/librte_kvargs.a 00:02:13.617 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:13.617 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:13.617 [19/268] Linking static target lib/librte_log.a 00:02:13.881 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:13.881 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:13.881 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:13.881 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:14.142 [24/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:14.142 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:14.142 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:14.142 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:14.142 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:14.142 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:14.142 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:14.142 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:14.142 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:14.142 [33/268] Compiling C 
object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:14.142 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:14.142 [35/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:14.142 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:14.142 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:14.142 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:14.142 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:14.142 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:14.142 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:14.142 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:14.142 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:14.142 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:14.142 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:14.142 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:14.142 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:14.142 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:14.142 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:14.142 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:14.142 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:14.142 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:14.142 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:14.142 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:14.142 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:14.142 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:14.142 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:14.142 [58/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:14.142 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:14.142 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:14.142 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:14.142 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:14.142 [63/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.142 [64/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:14.142 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:14.142 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:14.142 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:14.142 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:14.142 [69/268] Linking static target lib/librte_telemetry.a 00:02:14.142 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:14.143 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:14.143 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:14.143 
[73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:14.143 [74/268] Linking static target lib/librte_ring.a 00:02:14.143 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:14.143 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:14.143 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:14.143 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:14.143 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:14.143 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:14.143 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:14.143 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:14.143 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:14.143 [84/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:14.143 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:14.143 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:14.143 [87/268] Linking static target lib/librte_pci.a 00:02:14.143 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:14.143 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:14.143 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:14.143 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:14.143 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:14.143 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:14.143 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:14.143 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:14.143 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:14.143 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:14.143 [98/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:14.143 [99/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:14.143 [100/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:14.143 [101/268] Linking static target lib/librte_eal.a 00:02:14.143 [102/268] Linking static target lib/librte_rcu.a 00:02:14.402 [103/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:14.402 [104/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:14.402 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:14.402 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:14.402 [107/268] Linking static target lib/librte_mempool.a 00:02:14.402 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:14.402 [109/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:14.402 [110/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:14.402 [111/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:14.402 [112/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:14.402 [113/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:14.402 [114/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 
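The "User defined options" summary above pins down the whole DPDK configuration for this run. A minimal sketch of an equivalent standalone invocation, assuming the same checkout path; the option values are copied from that summary, while the long disable_apps/disable_libs lists are elided here and would need to be pasted in verbatim from the block above:

    # Sketch only: reproduce the DPDK build configured above.
    # The elided disable_apps/disable_libs values ("...") must come from the summary.
    cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
    meson setup build-tmp \
      --buildtype=debug \
      --default-library=static \
      --libdir=lib \
      --prefix="$PWD/build" \
      -Dc_args='-fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps='test-fib,test-sad,...' \
      -Ddisable_libs='bbdev,argparse,...' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
      -Denable_kmods=false \
      -Dmax_lcores=128 \
      -Dtests=false
    ninja -C build-tmp

The earlier runs of "not in enabled drivers build config" and "missing internal dependency" lines are the expected consequence: every driver outside the enable_drivers list, and every library named in disable_libs, is simply skipped.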
00:02:14.402 [115/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.402 [116/268] Linking static target lib/librte_mbuf.a 00:02:14.402 [117/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:14.662 [118/268] Linking target lib/librte_log.so.24.1 00:02:14.662 [119/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.662 [120/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:14.662 [121/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:14.662 [122/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.662 [123/268] Linking static target lib/librte_net.a 00:02:14.662 [124/268] Linking static target lib/librte_meter.a 00:02:14.662 [125/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:14.662 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:14.662 [127/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.662 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:14.662 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:14.663 [130/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:14.663 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:14.663 [132/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:14.663 [133/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:14.663 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:14.663 [135/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:14.663 [136/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:14.663 [137/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:14.663 [138/268] Linking static target lib/librte_cmdline.a 00:02:14.663 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:14.663 [140/268] Linking static target lib/librte_timer.a 00:02:14.663 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:14.663 [142/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:14.663 [143/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:14.663 [144/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:14.663 [145/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.663 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:14.663 [147/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:14.663 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:14.663 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:14.663 [150/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:14.921 [151/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:14.921 [152/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:14.921 [153/268] Linking target lib/librte_kvargs.so.24.1 00:02:14.921 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:14.921 [155/268] 
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:14.921 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:14.922 [157/268] Linking static target lib/librte_dmadev.a 00:02:14.922 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:14.922 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:14.922 [160/268] Linking target lib/librte_telemetry.so.24.1 00:02:14.922 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:14.922 [162/268] Linking static target lib/librte_compressdev.a 00:02:14.922 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:14.922 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:14.922 [165/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:14.922 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:14.922 [167/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:14.922 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:14.922 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:14.922 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:14.922 [171/268] Linking static target lib/librte_reorder.a 00:02:14.922 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:14.922 [173/268] Linking static target lib/librte_power.a 00:02:14.922 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:14.922 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:14.922 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:14.922 [177/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:14.922 [178/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.922 [179/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:14.922 [180/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:14.922 [181/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:14.922 [182/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.922 [183/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:14.922 [184/268] Linking static target lib/librte_security.a 00:02:14.922 [185/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:14.922 [186/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:14.922 [187/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:14.922 [188/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:14.922 [189/268] Linking static target lib/librte_hash.a 00:02:14.922 [190/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:14.922 [191/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:14.922 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:15.180 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:15.180 [194/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.180 [195/268] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:15.180 [196/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:15.180 [197/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.180 [198/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:15.180 [199/268] Linking static target lib/librte_cryptodev.a 00:02:15.180 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:15.180 [201/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:15.180 [202/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.180 [203/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:15.180 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:15.180 [205/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.180 [206/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.180 [207/268] Linking static target drivers/librte_bus_vdev.a 00:02:15.180 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.180 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.180 [210/268] Linking static target drivers/librte_bus_pci.a 00:02:15.180 [211/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.180 [212/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:15.440 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.440 [214/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:15.440 [215/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.440 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.440 [217/268] Linking static target drivers/librte_mempool_ring.a 00:02:15.440 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:15.440 [219/268] Linking static target lib/librte_ethdev.a 00:02:15.440 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.440 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.440 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.699 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.957 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.957 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.957 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:15.958 [227/268] Linking static target lib/librte_vhost.a 00:02:15.958 [228/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.958 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.333 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.268 [231/268] Generating lib/vhost.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:24.835 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.213 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.213 [234/268] Linking target lib/librte_eal.so.24.1 00:02:26.472 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:26.472 [236/268] Linking target lib/librte_pci.so.24.1 00:02:26.472 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:26.472 [238/268] Linking target lib/librte_timer.so.24.1 00:02:26.472 [239/268] Linking target lib/librte_ring.so.24.1 00:02:26.472 [240/268] Linking target lib/librte_meter.so.24.1 00:02:26.472 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:26.730 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:26.730 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:26.730 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:26.730 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:26.730 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:26.730 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:26.730 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:26.730 [249/268] Linking target lib/librte_rcu.so.24.1 00:02:26.988 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:26.988 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:26.988 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:26.988 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:26.988 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:27.246 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:27.246 [256/268] Linking target lib/librte_net.so.24.1 00:02:27.246 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:27.246 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:27.246 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:27.246 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:27.504 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:27.504 [262/268] Linking target lib/librte_security.so.24.1 00:02:27.504 [263/268] Linking target lib/librte_hash.so.24.1 00:02:27.504 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:27.504 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:27.504 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:27.504 [267/268] Linking target lib/librte_power.so.24.1 00:02:27.504 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:27.504 INFO: autodetecting backend as ninja 00:02:27.504 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:28.440 CC lib/ut/ut.o 00:02:28.440 CC lib/log/log.o 00:02:28.698 CC lib/log/log_flags.o 00:02:28.698 CC lib/log/log_deprecated.o 00:02:28.698 CC lib/ut_mock/mock.o 00:02:28.698 LIB libspdk_ut.a 00:02:28.698 LIB libspdk_log.a 00:02:28.698 LIB libspdk_ut_mock.a 00:02:28.956 CC lib/util/bit_array.o 00:02:28.956 CC 
lib/util/base64.o 00:02:28.956 CC lib/util/crc16.o 00:02:28.956 CC lib/util/cpuset.o 00:02:28.956 CC lib/util/crc32.o 00:02:28.956 CC lib/util/crc32c.o 00:02:28.956 CC lib/util/crc32_ieee.o 00:02:28.956 CC lib/util/crc64.o 00:02:28.956 CC lib/util/dif.o 00:02:28.956 CC lib/util/fd.o 00:02:28.956 CC lib/util/fd_group.o 00:02:28.956 CC lib/util/iov.o 00:02:28.956 CC lib/util/file.o 00:02:28.956 CC lib/util/hexlify.o 00:02:28.956 CC lib/ioat/ioat.o 00:02:28.956 CC lib/util/strerror_tls.o 00:02:28.956 CC lib/util/math.o 00:02:28.956 CC lib/util/net.o 00:02:28.956 CXX lib/trace_parser/trace.o 00:02:28.956 CC lib/util/pipe.o 00:02:28.956 CC lib/util/string.o 00:02:28.956 CC lib/util/zipf.o 00:02:28.956 CC lib/util/uuid.o 00:02:28.956 CC lib/util/xor.o 00:02:28.956 CC lib/dma/dma.o 00:02:28.956 CC lib/util/md5.o 00:02:29.215 CC lib/vfio_user/host/vfio_user_pci.o 00:02:29.215 CC lib/vfio_user/host/vfio_user.o 00:02:29.215 LIB libspdk_dma.a 00:02:29.215 LIB libspdk_ioat.a 00:02:29.215 LIB libspdk_vfio_user.a 00:02:29.215 LIB libspdk_util.a 00:02:29.475 LIB libspdk_trace_parser.a 00:02:29.734 CC lib/json/json_parse.o 00:02:29.734 CC lib/json/json_write.o 00:02:29.734 CC lib/json/json_util.o 00:02:29.734 CC lib/vmd/vmd.o 00:02:29.734 CC lib/vmd/led.o 00:02:29.734 CC lib/rdma_utils/rdma_utils.o 00:02:29.734 CC lib/conf/conf.o 00:02:29.734 CC lib/env_dpdk/pci.o 00:02:29.734 CC lib/env_dpdk/env.o 00:02:29.734 CC lib/env_dpdk/memory.o 00:02:29.734 CC lib/env_dpdk/threads.o 00:02:29.734 CC lib/env_dpdk/init.o 00:02:29.734 CC lib/env_dpdk/pci_vmd.o 00:02:29.734 CC lib/env_dpdk/pci_ioat.o 00:02:29.734 CC lib/env_dpdk/pci_virtio.o 00:02:29.734 CC lib/env_dpdk/pci_idxd.o 00:02:29.734 CC lib/env_dpdk/pci_event.o 00:02:29.734 CC lib/env_dpdk/sigbus_handler.o 00:02:29.734 CC lib/env_dpdk/pci_dpdk.o 00:02:29.734 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:29.734 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:29.734 CC lib/idxd/idxd.o 00:02:29.734 CC lib/idxd/idxd_user.o 00:02:29.734 CC lib/idxd/idxd_kernel.o 00:02:29.734 LIB libspdk_conf.a 00:02:29.734 LIB libspdk_json.a 00:02:29.734 LIB libspdk_rdma_utils.a 00:02:29.993 LIB libspdk_idxd.a 00:02:29.993 LIB libspdk_vmd.a 00:02:29.993 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:29.993 CC lib/jsonrpc/jsonrpc_server.o 00:02:29.993 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:30.253 CC lib/jsonrpc/jsonrpc_client.o 00:02:30.253 CC lib/rdma_provider/common.o 00:02:30.253 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:30.253 LIB libspdk_jsonrpc.a 00:02:30.253 LIB libspdk_rdma_provider.a 00:02:30.511 CC lib/rpc/rpc.o 00:02:30.511 LIB libspdk_env_dpdk.a 00:02:30.770 LIB libspdk_rpc.a 00:02:31.027 CC lib/trace/trace.o 00:02:31.027 CC lib/trace/trace_flags.o 00:02:31.027 CC lib/trace/trace_rpc.o 00:02:31.027 CC lib/notify/notify_rpc.o 00:02:31.027 CC lib/notify/notify.o 00:02:31.027 CC lib/keyring/keyring.o 00:02:31.027 CC lib/keyring/keyring_rpc.o 00:02:31.027 LIB libspdk_notify.a 00:02:31.284 LIB libspdk_trace.a 00:02:31.284 LIB libspdk_keyring.a 00:02:31.541 CC lib/thread/thread.o 00:02:31.541 CC lib/thread/iobuf.o 00:02:31.541 CC lib/sock/sock.o 00:02:31.541 CC lib/sock/sock_rpc.o 00:02:31.799 LIB libspdk_sock.a 00:02:32.055 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:32.055 CC lib/nvme/nvme_ctrlr.o 00:02:32.056 CC lib/nvme/nvme_ns_cmd.o 00:02:32.056 CC lib/nvme/nvme_ns.o 00:02:32.056 CC lib/nvme/nvme_fabric.o 00:02:32.056 CC lib/nvme/nvme_pcie_common.o 00:02:32.056 CC lib/nvme/nvme_pcie.o 00:02:32.056 CC lib/nvme/nvme_qpair.o 00:02:32.056 CC lib/nvme/nvme.o 00:02:32.056 CC 
lib/nvme/nvme_transport.o 00:02:32.056 CC lib/nvme/nvme_quirks.o 00:02:32.056 CC lib/nvme/nvme_discovery.o 00:02:32.056 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:32.056 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:32.056 CC lib/nvme/nvme_tcp.o 00:02:32.056 CC lib/nvme/nvme_opal.o 00:02:32.056 CC lib/nvme/nvme_io_msg.o 00:02:32.056 CC lib/nvme/nvme_poll_group.o 00:02:32.056 CC lib/nvme/nvme_zns.o 00:02:32.056 CC lib/nvme/nvme_cuse.o 00:02:32.056 CC lib/nvme/nvme_stubs.o 00:02:32.056 CC lib/nvme/nvme_auth.o 00:02:32.056 CC lib/nvme/nvme_vfio_user.o 00:02:32.056 CC lib/nvme/nvme_rdma.o 00:02:32.314 LIB libspdk_thread.a 00:02:32.572 CC lib/blob/blobstore.o 00:02:32.572 CC lib/blob/zeroes.o 00:02:32.572 CC lib/blob/request.o 00:02:32.572 CC lib/vfu_tgt/tgt_endpoint.o 00:02:32.572 CC lib/blob/blob_bs_dev.o 00:02:32.572 CC lib/vfu_tgt/tgt_rpc.o 00:02:32.572 CC lib/virtio/virtio.o 00:02:32.572 CC lib/virtio/virtio_vhost_user.o 00:02:32.572 CC lib/virtio/virtio_pci.o 00:02:32.572 CC lib/virtio/virtio_vfio_user.o 00:02:32.572 CC lib/init/subsystem.o 00:02:32.572 CC lib/init/rpc.o 00:02:32.572 CC lib/init/json_config.o 00:02:32.572 CC lib/init/subsystem_rpc.o 00:02:32.572 CC lib/fsdev/fsdev.o 00:02:32.572 CC lib/fsdev/fsdev_rpc.o 00:02:32.572 CC lib/fsdev/fsdev_io.o 00:02:32.572 CC lib/accel/accel_sw.o 00:02:32.572 CC lib/accel/accel.o 00:02:32.572 CC lib/accel/accel_rpc.o 00:02:32.871 LIB libspdk_init.a 00:02:32.871 LIB libspdk_virtio.a 00:02:32.871 LIB libspdk_vfu_tgt.a 00:02:32.871 LIB libspdk_fsdev.a 00:02:32.871 CC lib/event/reactor.o 00:02:32.871 CC lib/event/app.o 00:02:32.871 CC lib/event/log_rpc.o 00:02:32.871 CC lib/event/app_rpc.o 00:02:32.871 CC lib/event/scheduler_static.o 00:02:33.131 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:33.131 LIB libspdk_event.a 00:02:33.387 LIB libspdk_accel.a 00:02:33.387 LIB libspdk_nvme.a 00:02:33.645 LIB libspdk_fuse_dispatcher.a 00:02:33.645 CC lib/bdev/bdev.o 00:02:33.645 CC lib/bdev/bdev_rpc.o 00:02:33.645 CC lib/bdev/bdev_zone.o 00:02:33.645 CC lib/bdev/part.o 00:02:33.645 CC lib/bdev/scsi_nvme.o 00:02:34.211 LIB libspdk_blob.a 00:02:34.777 CC lib/blobfs/blobfs.o 00:02:34.777 CC lib/blobfs/tree.o 00:02:34.777 CC lib/lvol/lvol.o 00:02:35.036 LIB libspdk_blobfs.a 00:02:35.036 LIB libspdk_lvol.a 00:02:35.295 LIB libspdk_bdev.a 00:02:35.862 CC lib/ftl/ftl_init.o 00:02:35.862 CC lib/ftl/ftl_layout.o 00:02:35.862 CC lib/ftl/ftl_debug.o 00:02:35.862 CC lib/ftl/ftl_core.o 00:02:35.862 CC lib/ftl/ftl_io.o 00:02:35.862 CC lib/ftl/ftl_sb.o 00:02:35.862 CC lib/ftl/ftl_l2p.o 00:02:35.862 CC lib/ftl/ftl_band.o 00:02:35.862 CC lib/ftl/ftl_l2p_flat.o 00:02:35.862 CC lib/ftl/ftl_nv_cache.o 00:02:35.862 CC lib/ftl/ftl_band_ops.o 00:02:35.862 CC lib/ftl/ftl_l2p_cache.o 00:02:35.862 CC lib/ftl/ftl_writer.o 00:02:35.862 CC lib/ftl/ftl_rq.o 00:02:35.862 CC lib/ftl/ftl_reloc.o 00:02:35.862 CC lib/ftl/ftl_p2l.o 00:02:35.862 CC lib/ftl/ftl_p2l_log.o 00:02:35.862 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:35.862 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:35.862 CC lib/ftl/mngt/ftl_mngt.o 00:02:35.862 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:35.862 CC lib/scsi/lun.o 00:02:35.862 CC lib/scsi/dev.o 00:02:35.862 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:35.862 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:35.862 CC lib/scsi/scsi.o 00:02:35.862 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:35.862 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:35.862 CC lib/scsi/port.o 00:02:35.862 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:35.862 CC lib/nbd/nbd.o 00:02:35.862 CC lib/nbd/nbd_rpc.o 00:02:35.862 CC lib/ftl/mngt/ftl_mngt_self_test.o 
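The CC/LIB lines in this stretch are SPDK's own Makefiles at work: each "CC lib/<module>/<file>.o" line is one compile step, and the matching "LIB libspdk_<module>.a" line archives those objects into a static library. As an illustration only (the real flags come from SPDK's configure step and are not shown in this log), one CC/LIB pair boils down to roughly:

    # Sketch of one compile + archive pair; flags are illustrative, not SPDK's exact recipe.
    cc -c -g -fPIC -Werror -Iinclude -o nvme_ctrlr.o lib/nvme/nvme_ctrlr.c
    ar crs libspdk_nvme.a nvme_ctrlr.o   # plus the rest of lib/nvme's objects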
00:02:35.862 CC lib/scsi/scsi_bdev.o 00:02:35.862 CC lib/scsi/scsi_pr.o 00:02:35.862 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:35.862 CC lib/scsi/scsi_rpc.o 00:02:35.862 CC lib/scsi/task.o 00:02:35.862 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:35.862 CC lib/ftl/utils/ftl_conf.o 00:02:35.862 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:35.862 CC lib/ftl/utils/ftl_mempool.o 00:02:35.862 CC lib/ftl/utils/ftl_md.o 00:02:35.862 CC lib/ftl/utils/ftl_bitmap.o 00:02:35.862 CC lib/ftl/utils/ftl_property.o 00:02:35.862 CC lib/nvmf/ctrlr.o 00:02:35.862 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:35.862 CC lib/nvmf/ctrlr_discovery.o 00:02:35.862 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:35.862 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:35.862 CC lib/nvmf/ctrlr_bdev.o 00:02:35.862 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:35.862 CC lib/nvmf/subsystem.o 00:02:35.862 CC lib/nvmf/nvmf.o 00:02:35.862 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:35.862 CC lib/nvmf/nvmf_rpc.o 00:02:35.862 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:35.862 CC lib/nvmf/tcp.o 00:02:35.862 CC lib/ublk/ublk.o 00:02:35.862 CC lib/nvmf/transport.o 00:02:35.862 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:35.862 CC lib/ublk/ublk_rpc.o 00:02:35.862 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:35.862 CC lib/nvmf/mdns_server.o 00:02:35.862 CC lib/nvmf/stubs.o 00:02:35.862 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:35.862 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:35.862 CC lib/nvmf/vfio_user.o 00:02:35.862 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:35.862 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:35.862 CC lib/nvmf/auth.o 00:02:35.862 CC lib/nvmf/rdma.o 00:02:35.862 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:35.862 CC lib/ftl/base/ftl_base_dev.o 00:02:35.862 CC lib/ftl/base/ftl_base_bdev.o 00:02:35.862 CC lib/ftl/ftl_trace.o 00:02:36.121 LIB libspdk_nbd.a 00:02:36.121 LIB libspdk_scsi.a 00:02:36.380 LIB libspdk_ublk.a 00:02:36.380 CC lib/iscsi/iscsi.o 00:02:36.380 CC lib/iscsi/conn.o 00:02:36.380 CC lib/iscsi/init_grp.o 00:02:36.380 CC lib/iscsi/portal_grp.o 00:02:36.380 CC lib/iscsi/param.o 00:02:36.380 CC lib/iscsi/tgt_node.o 00:02:36.639 CC lib/iscsi/iscsi_subsystem.o 00:02:36.639 CC lib/iscsi/iscsi_rpc.o 00:02:36.639 CC lib/iscsi/task.o 00:02:36.639 CC lib/vhost/vhost_rpc.o 00:02:36.639 CC lib/vhost/vhost.o 00:02:36.639 CC lib/vhost/vhost_scsi.o 00:02:36.639 CC lib/vhost/vhost_blk.o 00:02:36.639 CC lib/vhost/rte_vhost_user.o 00:02:36.639 LIB libspdk_ftl.a 00:02:37.208 LIB libspdk_nvmf.a 00:02:37.208 LIB libspdk_vhost.a 00:02:37.208 LIB libspdk_iscsi.a 00:02:37.776 CC module/vfu_device/vfu_virtio.o 00:02:37.776 CC module/env_dpdk/env_dpdk_rpc.o 00:02:37.776 CC module/vfu_device/vfu_virtio_scsi.o 00:02:37.776 CC module/vfu_device/vfu_virtio_fs.o 00:02:37.776 CC module/vfu_device/vfu_virtio_rpc.o 00:02:37.776 CC module/vfu_device/vfu_virtio_blk.o 00:02:37.776 CC module/keyring/linux/keyring_rpc.o 00:02:37.776 CC module/keyring/linux/keyring.o 00:02:37.776 CC module/accel/dsa/accel_dsa.o 00:02:37.776 CC module/accel/dsa/accel_dsa_rpc.o 00:02:37.776 CC module/keyring/file/keyring.o 00:02:37.776 CC module/keyring/file/keyring_rpc.o 00:02:37.776 CC module/accel/ioat/accel_ioat.o 00:02:37.776 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:37.776 CC module/accel/ioat/accel_ioat_rpc.o 00:02:37.776 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:37.776 CC module/sock/posix/posix.o 00:02:37.776 CC module/fsdev/aio/linux_aio_mgr.o 00:02:37.776 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:37.776 CC module/blob/bdev/blob_bdev.o 00:02:37.776 CC 
module/fsdev/aio/fsdev_aio.o 00:02:37.776 CC module/accel/error/accel_error.o 00:02:37.776 CC module/accel/error/accel_error_rpc.o 00:02:37.776 LIB libspdk_env_dpdk_rpc.a 00:02:37.776 CC module/scheduler/gscheduler/gscheduler.o 00:02:37.776 CC module/accel/iaa/accel_iaa.o 00:02:37.776 CC module/accel/iaa/accel_iaa_rpc.o 00:02:38.035 LIB libspdk_keyring_linux.a 00:02:38.035 LIB libspdk_keyring_file.a 00:02:38.035 LIB libspdk_scheduler_dpdk_governor.a 00:02:38.035 LIB libspdk_scheduler_gscheduler.a 00:02:38.035 LIB libspdk_scheduler_dynamic.a 00:02:38.035 LIB libspdk_accel_ioat.a 00:02:38.035 LIB libspdk_accel_error.a 00:02:38.035 LIB libspdk_accel_iaa.a 00:02:38.035 LIB libspdk_blob_bdev.a 00:02:38.035 LIB libspdk_accel_dsa.a 00:02:38.035 LIB libspdk_vfu_device.a 00:02:38.293 LIB libspdk_sock_posix.a 00:02:38.294 LIB libspdk_fsdev_aio.a 00:02:38.294 CC module/bdev/error/vbdev_error.o 00:02:38.294 CC module/bdev/error/vbdev_error_rpc.o 00:02:38.294 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:38.294 CC module/bdev/iscsi/bdev_iscsi.o 00:02:38.294 CC module/bdev/nvme/bdev_nvme.o 00:02:38.294 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:38.294 CC module/bdev/nvme/nvme_rpc.o 00:02:38.294 CC module/bdev/nvme/vbdev_opal.o 00:02:38.294 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:38.294 CC module/bdev/nvme/bdev_mdns_client.o 00:02:38.294 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:38.294 CC module/bdev/gpt/vbdev_gpt.o 00:02:38.294 CC module/bdev/gpt/gpt.o 00:02:38.294 CC module/bdev/lvol/vbdev_lvol.o 00:02:38.294 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:38.294 CC module/bdev/delay/vbdev_delay.o 00:02:38.294 CC module/bdev/passthru/vbdev_passthru.o 00:02:38.294 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:38.294 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:38.294 CC module/bdev/null/bdev_null.o 00:02:38.294 CC module/bdev/null/bdev_null_rpc.o 00:02:38.553 CC module/bdev/ftl/bdev_ftl.o 00:02:38.553 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:38.553 CC module/blobfs/bdev/blobfs_bdev.o 00:02:38.553 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:38.553 CC module/bdev/malloc/bdev_malloc.o 00:02:38.553 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:38.553 CC module/bdev/aio/bdev_aio.o 00:02:38.553 CC module/bdev/aio/bdev_aio_rpc.o 00:02:38.553 CC module/bdev/raid/bdev_raid.o 00:02:38.553 CC module/bdev/raid/bdev_raid_rpc.o 00:02:38.553 CC module/bdev/raid/bdev_raid_sb.o 00:02:38.553 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:38.553 CC module/bdev/raid/raid0.o 00:02:38.553 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:38.553 CC module/bdev/split/vbdev_split.o 00:02:38.553 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:38.553 CC module/bdev/raid/raid1.o 00:02:38.553 CC module/bdev/raid/concat.o 00:02:38.553 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:38.553 CC module/bdev/split/vbdev_split_rpc.o 00:02:38.553 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:38.553 LIB libspdk_blobfs_bdev.a 00:02:38.553 LIB libspdk_bdev_error.a 00:02:38.553 LIB libspdk_bdev_null.a 00:02:38.553 LIB libspdk_bdev_passthru.a 00:02:38.553 LIB libspdk_bdev_iscsi.a 00:02:38.812 LIB libspdk_bdev_aio.a 00:02:38.812 LIB libspdk_bdev_delay.a 00:02:38.812 LIB libspdk_bdev_split.a 00:02:38.812 LIB libspdk_bdev_malloc.a 00:02:38.812 LIB libspdk_bdev_gpt.a 00:02:38.812 LIB libspdk_bdev_ftl.a 00:02:38.812 LIB libspdk_bdev_lvol.a 00:02:38.812 LIB libspdk_bdev_zone_block.a 00:02:38.812 LIB libspdk_bdev_virtio.a 00:02:39.071 LIB libspdk_bdev_raid.a 00:02:40.008 LIB libspdk_bdev_nvme.a 00:02:40.576 CC 
module/event/subsystems/scheduler/scheduler.o 00:02:40.576 CC module/event/subsystems/vmd/vmd.o 00:02:40.576 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:40.576 CC module/event/subsystems/sock/sock.o 00:02:40.576 CC module/event/subsystems/iobuf/iobuf.o 00:02:40.576 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:40.576 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:40.576 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:40.576 CC module/event/subsystems/keyring/keyring.o 00:02:40.576 CC module/event/subsystems/fsdev/fsdev.o 00:02:40.576 LIB libspdk_event_vmd.a 00:02:40.576 LIB libspdk_event_scheduler.a 00:02:40.576 LIB libspdk_event_vhost_blk.a 00:02:40.576 LIB libspdk_event_keyring.a 00:02:40.576 LIB libspdk_event_vfu_tgt.a 00:02:40.576 LIB libspdk_event_sock.a 00:02:40.576 LIB libspdk_event_iobuf.a 00:02:40.576 LIB libspdk_event_fsdev.a 00:02:40.835 CC module/event/subsystems/accel/accel.o 00:02:41.093 LIB libspdk_event_accel.a 00:02:41.351 CC module/event/subsystems/bdev/bdev.o 00:02:41.610 LIB libspdk_event_bdev.a 00:02:41.868 CC module/event/subsystems/scsi/scsi.o 00:02:41.868 CC module/event/subsystems/ublk/ublk.o 00:02:41.868 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:41.868 CC module/event/subsystems/nbd/nbd.o 00:02:41.868 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:41.868 LIB libspdk_event_scsi.a 00:02:41.868 LIB libspdk_event_ublk.a 00:02:41.868 LIB libspdk_event_nbd.a 00:02:41.868 LIB libspdk_event_nvmf.a 00:02:42.126 CC module/event/subsystems/iscsi/iscsi.o 00:02:42.126 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:42.384 LIB libspdk_event_vhost_scsi.a 00:02:42.384 LIB libspdk_event_iscsi.a 00:02:42.650 CC test/rpc_client/rpc_client_test.o 00:02:42.650 TEST_HEADER include/spdk/accel_module.h 00:02:42.650 TEST_HEADER include/spdk/barrier.h 00:02:42.650 TEST_HEADER include/spdk/assert.h 00:02:42.650 TEST_HEADER include/spdk/base64.h 00:02:42.650 TEST_HEADER include/spdk/accel.h 00:02:42.650 TEST_HEADER include/spdk/bdev_module.h 00:02:42.650 TEST_HEADER include/spdk/bdev.h 00:02:42.650 CC app/spdk_nvme_identify/identify.o 00:02:42.650 TEST_HEADER include/spdk/bit_array.h 00:02:42.650 TEST_HEADER include/spdk/bdev_zone.h 00:02:42.650 TEST_HEADER include/spdk/bit_pool.h 00:02:42.650 TEST_HEADER include/spdk/blob_bdev.h 00:02:42.650 TEST_HEADER include/spdk/blobfs.h 00:02:42.650 TEST_HEADER include/spdk/blob.h 00:02:42.650 TEST_HEADER include/spdk/conf.h 00:02:42.650 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:42.650 TEST_HEADER include/spdk/config.h 00:02:42.650 CC app/spdk_top/spdk_top.o 00:02:42.650 TEST_HEADER include/spdk/cpuset.h 00:02:42.650 CC app/spdk_lspci/spdk_lspci.o 00:02:42.650 CXX app/trace/trace.o 00:02:42.650 TEST_HEADER include/spdk/crc64.h 00:02:42.650 TEST_HEADER include/spdk/crc16.h 00:02:42.650 TEST_HEADER include/spdk/dif.h 00:02:42.650 TEST_HEADER include/spdk/dma.h 00:02:42.650 TEST_HEADER include/spdk/endian.h 00:02:42.650 TEST_HEADER include/spdk/crc32.h 00:02:42.650 CC app/spdk_nvme_discover/discovery_aer.o 00:02:42.650 TEST_HEADER include/spdk/env_dpdk.h 00:02:42.650 TEST_HEADER include/spdk/event.h 00:02:42.650 TEST_HEADER include/spdk/fd_group.h 00:02:42.650 CC app/spdk_nvme_perf/perf.o 00:02:42.650 TEST_HEADER include/spdk/env.h 00:02:42.650 CC app/trace_record/trace_record.o 00:02:42.650 TEST_HEADER include/spdk/file.h 00:02:42.650 TEST_HEADER include/spdk/fsdev.h 00:02:42.650 TEST_HEADER include/spdk/fsdev_module.h 00:02:42.650 TEST_HEADER include/spdk/fd.h 00:02:42.650 TEST_HEADER include/spdk/ftl.h 
00:02:42.650 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:42.650 TEST_HEADER include/spdk/gpt_spec.h 00:02:42.650 TEST_HEADER include/spdk/histogram_data.h 00:02:42.650 TEST_HEADER include/spdk/idxd.h 00:02:42.650 TEST_HEADER include/spdk/hexlify.h 00:02:42.650 TEST_HEADER include/spdk/idxd_spec.h 00:02:42.650 TEST_HEADER include/spdk/init.h 00:02:42.650 TEST_HEADER include/spdk/ioat.h 00:02:42.650 TEST_HEADER include/spdk/ioat_spec.h 00:02:42.650 TEST_HEADER include/spdk/json.h 00:02:42.650 TEST_HEADER include/spdk/iscsi_spec.h 00:02:42.650 TEST_HEADER include/spdk/keyring.h 00:02:42.650 TEST_HEADER include/spdk/jsonrpc.h 00:02:42.650 TEST_HEADER include/spdk/likely.h 00:02:42.650 TEST_HEADER include/spdk/log.h 00:02:42.650 TEST_HEADER include/spdk/keyring_module.h 00:02:42.650 TEST_HEADER include/spdk/lvol.h 00:02:42.650 TEST_HEADER include/spdk/md5.h 00:02:42.650 TEST_HEADER include/spdk/memory.h 00:02:42.650 TEST_HEADER include/spdk/mmio.h 00:02:42.650 TEST_HEADER include/spdk/nbd.h 00:02:42.650 TEST_HEADER include/spdk/net.h 00:02:42.650 TEST_HEADER include/spdk/nvme.h 00:02:42.650 TEST_HEADER include/spdk/notify.h 00:02:42.650 TEST_HEADER include/spdk/nvme_intel.h 00:02:42.650 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:42.650 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:42.650 TEST_HEADER include/spdk/nvme_spec.h 00:02:42.650 TEST_HEADER include/spdk/nvme_zns.h 00:02:42.650 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:42.650 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:42.650 TEST_HEADER include/spdk/nvmf.h 00:02:42.650 TEST_HEADER include/spdk/nvmf_spec.h 00:02:42.650 TEST_HEADER include/spdk/nvmf_transport.h 00:02:42.650 TEST_HEADER include/spdk/opal.h 00:02:42.650 TEST_HEADER include/spdk/opal_spec.h 00:02:42.650 TEST_HEADER include/spdk/pci_ids.h 00:02:42.650 TEST_HEADER include/spdk/pipe.h 00:02:42.650 TEST_HEADER include/spdk/queue.h 00:02:42.650 TEST_HEADER include/spdk/reduce.h 00:02:42.650 TEST_HEADER include/spdk/rpc.h 00:02:42.650 TEST_HEADER include/spdk/scheduler.h 00:02:42.650 TEST_HEADER include/spdk/scsi.h 00:02:42.650 TEST_HEADER include/spdk/scsi_spec.h 00:02:42.650 TEST_HEADER include/spdk/sock.h 00:02:42.650 TEST_HEADER include/spdk/stdinc.h 00:02:42.650 TEST_HEADER include/spdk/string.h 00:02:42.650 TEST_HEADER include/spdk/thread.h 00:02:42.650 TEST_HEADER include/spdk/trace.h 00:02:42.650 TEST_HEADER include/spdk/trace_parser.h 00:02:42.650 TEST_HEADER include/spdk/tree.h 00:02:42.650 TEST_HEADER include/spdk/ublk.h 00:02:42.650 TEST_HEADER include/spdk/util.h 00:02:42.650 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:42.650 TEST_HEADER include/spdk/uuid.h 00:02:42.650 TEST_HEADER include/spdk/version.h 00:02:42.650 CC app/nvmf_tgt/nvmf_main.o 00:02:42.650 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:42.650 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:42.650 TEST_HEADER include/spdk/vhost.h 00:02:42.650 TEST_HEADER include/spdk/vmd.h 00:02:42.650 TEST_HEADER include/spdk/zipf.h 00:02:42.650 TEST_HEADER include/spdk/xor.h 00:02:42.650 CXX test/cpp_headers/accel.o 00:02:42.650 CXX test/cpp_headers/accel_module.o 00:02:42.650 CXX test/cpp_headers/assert.o 00:02:42.650 CC app/iscsi_tgt/iscsi_tgt.o 00:02:42.650 CXX test/cpp_headers/base64.o 00:02:42.650 CXX test/cpp_headers/barrier.o 00:02:42.650 CXX test/cpp_headers/bdev.o 00:02:42.650 CXX test/cpp_headers/bdev_zone.o 00:02:42.650 CXX test/cpp_headers/bdev_module.o 00:02:42.650 CXX test/cpp_headers/bit_pool.o 00:02:42.650 CXX test/cpp_headers/bit_array.o 00:02:42.650 CXX test/cpp_headers/blob_bdev.o 
00:02:42.650 CXX test/cpp_headers/blobfs_bdev.o 00:02:42.651 CXX test/cpp_headers/blobfs.o 00:02:42.651 CXX test/cpp_headers/blob.o 00:02:42.651 CXX test/cpp_headers/conf.o 00:02:42.651 CXX test/cpp_headers/config.o 00:02:42.651 CXX test/cpp_headers/cpuset.o 00:02:42.651 CXX test/cpp_headers/crc16.o 00:02:42.651 CXX test/cpp_headers/crc32.o 00:02:42.651 CXX test/cpp_headers/crc64.o 00:02:42.651 CC app/spdk_dd/spdk_dd.o 00:02:42.651 CXX test/cpp_headers/dif.o 00:02:42.651 CXX test/cpp_headers/dma.o 00:02:42.651 CXX test/cpp_headers/endian.o 00:02:42.651 CXX test/cpp_headers/env_dpdk.o 00:02:42.651 CXX test/cpp_headers/env.o 00:02:42.651 CXX test/cpp_headers/event.o 00:02:42.651 CXX test/cpp_headers/fd_group.o 00:02:42.651 CXX test/cpp_headers/fd.o 00:02:42.651 CXX test/cpp_headers/file.o 00:02:42.651 CXX test/cpp_headers/fsdev.o 00:02:42.651 CXX test/cpp_headers/fsdev_module.o 00:02:42.651 CXX test/cpp_headers/ftl.o 00:02:42.651 CXX test/cpp_headers/fuse_dispatcher.o 00:02:42.651 CXX test/cpp_headers/gpt_spec.o 00:02:42.651 CXX test/cpp_headers/hexlify.o 00:02:42.651 CXX test/cpp_headers/histogram_data.o 00:02:42.651 CXX test/cpp_headers/idxd.o 00:02:42.651 CXX test/cpp_headers/idxd_spec.o 00:02:42.651 CXX test/cpp_headers/init.o 00:02:42.651 CXX test/cpp_headers/ioat.o 00:02:42.651 CXX test/cpp_headers/ioat_spec.o 00:02:42.651 CC test/env/vtophys/vtophys.o 00:02:42.651 CC test/thread/poller_perf/poller_perf.o 00:02:42.651 CC test/app/histogram_perf/histogram_perf.o 00:02:42.651 CC test/env/memory/memory_ut.o 00:02:42.651 CC examples/ioat/verify/verify.o 00:02:42.651 CC app/spdk_tgt/spdk_tgt.o 00:02:42.651 CC test/env/pci/pci_ut.o 00:02:42.651 CC test/app/stub/stub.o 00:02:42.651 CC examples/ioat/perf/perf.o 00:02:42.651 CC test/app/jsoncat/jsoncat.o 00:02:42.651 CC examples/util/zipf/zipf.o 00:02:42.651 CC test/thread/lock/spdk_lock.o 00:02:42.651 CXX test/cpp_headers/iscsi_spec.o 00:02:42.651 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:42.651 CC app/fio/nvme/fio_plugin.o 00:02:42.651 CC test/dma/test_dma/test_dma.o 00:02:42.651 CC test/app/bdev_svc/bdev_svc.o 00:02:42.909 LINK spdk_lspci 00:02:42.909 CC app/fio/bdev/fio_plugin.o 00:02:42.909 CC test/env/mem_callbacks/mem_callbacks.o 00:02:42.909 LINK rpc_client_test 00:02:42.909 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:42.909 LINK spdk_nvme_discover 00:02:42.909 CXX test/cpp_headers/json.o 00:02:42.909 CXX test/cpp_headers/jsonrpc.o 00:02:42.909 CXX test/cpp_headers/keyring.o 00:02:42.909 LINK histogram_perf 00:02:42.909 CXX test/cpp_headers/keyring_module.o 00:02:42.909 LINK jsoncat 00:02:42.909 CXX test/cpp_headers/likely.o 00:02:42.909 CXX test/cpp_headers/log.o 00:02:42.909 CXX test/cpp_headers/lvol.o 00:02:42.909 CXX test/cpp_headers/md5.o 00:02:42.909 CXX test/cpp_headers/memory.o 00:02:42.909 CXX test/cpp_headers/mmio.o 00:02:42.909 CXX test/cpp_headers/nbd.o 00:02:42.909 CXX test/cpp_headers/net.o 00:02:42.909 CXX test/cpp_headers/notify.o 00:02:42.909 CXX test/cpp_headers/nvme.o 00:02:42.909 LINK spdk_trace_record 00:02:42.909 CXX test/cpp_headers/nvme_intel.o 00:02:42.909 CXX test/cpp_headers/nvme_ocssd.o 00:02:42.909 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:42.909 CXX test/cpp_headers/nvme_spec.o 00:02:42.909 LINK vtophys 00:02:42.909 LINK poller_perf 00:02:42.909 CXX test/cpp_headers/nvme_zns.o 00:02:42.909 CXX test/cpp_headers/nvmf_cmd.o 00:02:42.909 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:42.909 LINK zipf 00:02:42.909 CXX test/cpp_headers/nvmf.o 00:02:42.909 CXX test/cpp_headers/nvmf_spec.o 
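The TEST_HEADER list and the long run of "CXX test/cpp_headers/*.o" lines appear to be a header self-containment pass: each public spdk/*.h is compiled in its own translation unit, as C++, so a header that forgets one of its own includes (or is not C++-clean) fails right here rather than in a downstream consumer. A sketch of the same check under that assumption; the loop and temp-file handling are illustrative, not SPDK's actual generator:

    # Sketch: compile every public header in isolation, as the CXX lines above do.
    for h in include/spdk/*.h; do
      tu=$(mktemp --suffix=.cpp)
      printf '#include <spdk/%s>\nint main() { return 0; }\n' "$(basename "$h")" > "$tu"
      c++ -Iinclude -c -o /dev/null "$tu" || echo "not self-contained: $h"
      rm -f "$tu"
    done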
00:02:42.909 CXX test/cpp_headers/nvmf_transport.o 00:02:42.909 CXX test/cpp_headers/opal.o 00:02:42.909 CXX test/cpp_headers/opal_spec.o 00:02:42.909 CXX test/cpp_headers/pci_ids.o 00:02:42.909 CXX test/cpp_headers/pipe.o 00:02:42.910 CXX test/cpp_headers/queue.o 00:02:42.910 CXX test/cpp_headers/reduce.o 00:02:42.910 CXX test/cpp_headers/rpc.o 00:02:42.910 CXX test/cpp_headers/scheduler.o 00:02:42.910 LINK nvmf_tgt 00:02:42.910 CXX test/cpp_headers/scsi.o 00:02:42.910 LINK env_dpdk_post_init 00:02:42.910 LINK interrupt_tgt 00:02:42.910 CXX test/cpp_headers/scsi_spec.o 00:02:42.910 CXX test/cpp_headers/sock.o 00:02:42.910 CXX test/cpp_headers/stdinc.o 00:02:42.910 CXX test/cpp_headers/string.o 00:02:42.910 LINK stub 00:02:42.910 CXX test/cpp_headers/thread.o 00:02:42.910 LINK iscsi_tgt 00:02:42.910 CXX test/cpp_headers/trace.o 00:02:42.910 LINK ioat_perf 00:02:42.910 LINK verify 00:02:42.910 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:42.910 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:43.171 LINK bdev_svc 00:02:43.171 LINK spdk_tgt 00:02:43.171 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:43.171 CXX test/cpp_headers/trace_parser.o 00:02:43.171 CXX test/cpp_headers/tree.o 00:02:43.171 CXX test/cpp_headers/ublk.o 00:02:43.171 CXX test/cpp_headers/util.o 00:02:43.171 CXX test/cpp_headers/uuid.o 00:02:43.171 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:43.171 CXX test/cpp_headers/version.o 00:02:43.171 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:43.171 CXX test/cpp_headers/vfio_user_pci.o 00:02:43.171 CXX test/cpp_headers/vfio_user_spec.o 00:02:43.171 CXX test/cpp_headers/vhost.o 00:02:43.171 CXX test/cpp_headers/vmd.o 00:02:43.171 CXX test/cpp_headers/xor.o 00:02:43.171 CXX test/cpp_headers/zipf.o 00:02:43.171 LINK spdk_trace 00:02:43.171 LINK pci_ut 00:02:43.171 LINK spdk_dd 00:02:43.429 LINK test_dma 00:02:43.429 LINK nvme_fuzz 00:02:43.429 LINK spdk_nvme_identify 00:02:43.429 LINK spdk_bdev 00:02:43.429 LINK spdk_nvme 00:02:43.429 LINK spdk_nvme_perf 00:02:43.429 LINK llvm_vfio_fuzz 00:02:43.429 LINK mem_callbacks 00:02:43.429 LINK vhost_fuzz 00:02:43.688 CC examples/vmd/led/led.o 00:02:43.688 CC examples/vmd/lsvmd/lsvmd.o 00:02:43.688 CC examples/sock/hello_world/hello_sock.o 00:02:43.688 CC app/vhost/vhost.o 00:02:43.688 CC examples/idxd/perf/perf.o 00:02:43.688 LINK llvm_nvme_fuzz 00:02:43.688 CC examples/thread/thread/thread_ex.o 00:02:43.688 LINK spdk_top 00:02:43.688 LINK lsvmd 00:02:43.688 LINK led 00:02:43.688 LINK hello_sock 00:02:43.688 LINK vhost 00:02:43.947 LINK idxd_perf 00:02:43.947 LINK thread 00:02:43.947 LINK memory_ut 00:02:43.947 LINK spdk_lock 00:02:44.206 LINK iscsi_fuzz 00:02:44.466 CC examples/nvme/hello_world/hello_world.o 00:02:44.466 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:44.466 CC examples/nvme/abort/abort.o 00:02:44.466 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:44.466 CC examples/nvme/reconnect/reconnect.o 00:02:44.466 CC examples/nvme/arbitration/arbitration.o 00:02:44.466 CC examples/nvme/hotplug/hotplug.o 00:02:44.466 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:44.725 LINK cmb_copy 00:02:44.725 CC test/event/reactor_perf/reactor_perf.o 00:02:44.725 CC test/event/event_perf/event_perf.o 00:02:44.725 CC test/event/app_repeat/app_repeat.o 00:02:44.725 LINK pmr_persistence 00:02:44.725 LINK hello_world 00:02:44.725 CC test/event/reactor/reactor.o 00:02:44.725 LINK hotplug 00:02:44.725 CC test/event/scheduler/scheduler.o 00:02:44.725 LINK reconnect 00:02:44.725 LINK abort 00:02:44.725 LINK arbitration 
00:02:44.725 LINK reactor_perf 00:02:44.725 LINK nvme_manage 00:02:44.725 LINK app_repeat 00:02:44.725 LINK event_perf 00:02:44.725 LINK reactor 00:02:44.984 LINK scheduler 00:02:44.984 CC test/nvme/reserve/reserve.o 00:02:44.984 CC test/nvme/connect_stress/connect_stress.o 00:02:44.984 CC test/nvme/compliance/nvme_compliance.o 00:02:44.984 CC test/nvme/reset/reset.o 00:02:44.984 CC test/nvme/e2edp/nvme_dp.o 00:02:44.984 CC test/nvme/aer/aer.o 00:02:44.984 CC test/nvme/sgl/sgl.o 00:02:44.984 CC test/nvme/fused_ordering/fused_ordering.o 00:02:44.984 CC test/nvme/cuse/cuse.o 00:02:44.984 CC test/nvme/simple_copy/simple_copy.o 00:02:44.984 CC test/nvme/fdp/fdp.o 00:02:44.984 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:44.984 CC test/nvme/startup/startup.o 00:02:44.984 CC test/nvme/err_injection/err_injection.o 00:02:44.984 CC test/nvme/overhead/overhead.o 00:02:44.984 CC test/blobfs/mkfs/mkfs.o 00:02:44.984 CC test/nvme/boot_partition/boot_partition.o 00:02:44.984 CC test/accel/dif/dif.o 00:02:45.242 CC test/lvol/esnap/esnap.o 00:02:45.242 LINK connect_stress 00:02:45.242 LINK startup 00:02:45.242 LINK boot_partition 00:02:45.242 LINK reserve 00:02:45.242 LINK doorbell_aers 00:02:45.242 LINK err_injection 00:02:45.242 LINK fused_ordering 00:02:45.242 LINK mkfs 00:02:45.242 LINK simple_copy 00:02:45.242 LINK reset 00:02:45.242 LINK nvme_dp 00:02:45.242 LINK aer 00:02:45.242 LINK sgl 00:02:45.242 LINK overhead 00:02:45.242 LINK fdp 00:02:45.242 LINK nvme_compliance 00:02:45.500 LINK dif 00:02:45.500 CC examples/accel/perf/accel_perf.o 00:02:45.758 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:45.758 CC examples/blob/hello_world/hello_blob.o 00:02:45.758 CC examples/blob/cli/blobcli.o 00:02:45.758 LINK hello_fsdev 00:02:45.758 LINK hello_blob 00:02:46.017 LINK accel_perf 00:02:46.017 LINK blobcli 00:02:46.017 LINK cuse 00:02:46.585 CC examples/bdev/bdevperf/bdevperf.o 00:02:46.585 CC examples/bdev/hello_world/hello_bdev.o 00:02:46.844 LINK hello_bdev 00:02:47.103 LINK bdevperf 00:02:47.103 CC test/bdev/bdevio/bdevio.o 00:02:47.362 LINK bdevio 00:02:48.740 CC examples/nvmf/nvmf/nvmf.o 00:02:48.740 LINK esnap 00:02:48.740 LINK nvmf 00:02:50.120 00:02:50.120 real 0m45.746s 00:02:50.120 user 6m56.561s 00:02:50.120 sys 2m17.887s 00:02:50.120 18:53:07 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:50.120 18:53:07 make -- common/autotest_common.sh@10 -- $ set +x 00:02:50.120 ************************************ 00:02:50.120 END TEST make 00:02:50.120 ************************************ 00:02:50.120 18:53:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:50.120 18:53:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:50.120 18:53:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:50.120 18:53:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.120 18:53:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:50.120 18:53:07 -- pm/common@44 -- $ pid=2627117 00:02:50.120 18:53:07 -- pm/common@50 -- $ kill -TERM 2627117 00:02:50.120 18:53:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.120 18:53:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:50.120 18:53:07 -- pm/common@44 -- $ pid=2627118 00:02:50.120 18:53:07 -- pm/common@50 -- $ kill -TERM 2627118 00:02:50.120 18:53:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.120 
18:53:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:50.120 18:53:07 -- pm/common@44 -- $ pid=2627120 00:02:50.120 18:53:07 -- pm/common@50 -- $ kill -TERM 2627120 00:02:50.120 18:53:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.120 18:53:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:50.120 18:53:07 -- pm/common@44 -- $ pid=2627144 00:02:50.120 18:53:07 -- pm/common@50 -- $ sudo -E kill -TERM 2627144 00:02:50.120 18:53:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:50.120 18:53:07 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:02:50.120 18:53:07 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:50.120 18:53:07 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:50.120 18:53:07 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:50.379 18:53:07 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:50.379 18:53:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:50.379 18:53:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:50.380 18:53:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:50.380 18:53:07 -- scripts/common.sh@336 -- # IFS=.-: 00:02:50.380 18:53:07 -- scripts/common.sh@336 -- # read -ra ver1 00:02:50.380 18:53:07 -- scripts/common.sh@337 -- # IFS=.-: 00:02:50.380 18:53:07 -- scripts/common.sh@337 -- # read -ra ver2 00:02:50.380 18:53:07 -- scripts/common.sh@338 -- # local 'op=<' 00:02:50.380 18:53:07 -- scripts/common.sh@340 -- # ver1_l=2 00:02:50.380 18:53:07 -- scripts/common.sh@341 -- # ver2_l=1 00:02:50.380 18:53:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:50.380 18:53:07 -- scripts/common.sh@344 -- # case "$op" in 00:02:50.380 18:53:07 -- scripts/common.sh@345 -- # : 1 00:02:50.380 18:53:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:50.380 18:53:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:50.380 18:53:07 -- scripts/common.sh@365 -- # decimal 1 00:02:50.380 18:53:07 -- scripts/common.sh@353 -- # local d=1 00:02:50.380 18:53:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:50.380 18:53:07 -- scripts/common.sh@355 -- # echo 1 00:02:50.380 18:53:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:50.380 18:53:07 -- scripts/common.sh@366 -- # decimal 2 00:02:50.380 18:53:07 -- scripts/common.sh@353 -- # local d=2 00:02:50.380 18:53:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:50.380 18:53:07 -- scripts/common.sh@355 -- # echo 2 00:02:50.380 18:53:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:50.380 18:53:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:50.380 18:53:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:50.380 18:53:07 -- scripts/common.sh@368 -- # return 0 00:02:50.380 18:53:07 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:50.380 18:53:07 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:50.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.380 --rc genhtml_branch_coverage=1 00:02:50.380 --rc genhtml_function_coverage=1 00:02:50.380 --rc genhtml_legend=1 00:02:50.380 --rc geninfo_all_blocks=1 00:02:50.380 --rc geninfo_unexecuted_blocks=1 00:02:50.380 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.380 ' 00:02:50.380 18:53:07 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:50.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.380 --rc genhtml_branch_coverage=1 00:02:50.380 --rc genhtml_function_coverage=1 00:02:50.380 --rc genhtml_legend=1 00:02:50.380 --rc geninfo_all_blocks=1 00:02:50.380 --rc geninfo_unexecuted_blocks=1 00:02:50.380 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.380 ' 00:02:50.380 18:53:07 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:50.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.380 --rc genhtml_branch_coverage=1 00:02:50.380 --rc genhtml_function_coverage=1 00:02:50.380 --rc genhtml_legend=1 00:02:50.380 --rc geninfo_all_blocks=1 00:02:50.380 --rc geninfo_unexecuted_blocks=1 00:02:50.380 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.380 ' 00:02:50.380 18:53:07 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:50.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.380 --rc genhtml_branch_coverage=1 00:02:50.380 --rc genhtml_function_coverage=1 00:02:50.380 --rc genhtml_legend=1 00:02:50.380 --rc geninfo_all_blocks=1 00:02:50.380 --rc geninfo_unexecuted_blocks=1 00:02:50.380 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.380 ' 00:02:50.380 18:53:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:50.380 18:53:07 -- nvmf/common.sh@7 -- # uname -s 00:02:50.380 18:53:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:50.380 18:53:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:50.380 18:53:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:50.380 18:53:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:50.380 18:53:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:50.380 18:53:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:50.380 18:53:07 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:50.380 18:53:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:50.380 18:53:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:50.380 18:53:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:50.380 18:53:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:02:50.380 18:53:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:02:50.380 18:53:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:50.380 18:53:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:50.380 18:53:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:50.380 18:53:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:50.380 18:53:07 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:50.380 18:53:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:50.380 18:53:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:50.380 18:53:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:50.380 18:53:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:50.380 18:53:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.380 18:53:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.380 18:53:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.380 18:53:07 -- paths/export.sh@5 -- # export PATH 00:02:50.380 18:53:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.380 18:53:07 -- nvmf/common.sh@51 -- # : 0 00:02:50.380 18:53:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:50.380 18:53:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:50.380 18:53:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:50.380 18:53:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:50.380 18:53:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:50.380 18:53:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:50.380 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:50.380 18:53:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:50.380 18:53:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:50.380 18:53:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:50.380 18:53:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:50.380 18:53:07 -- spdk/autotest.sh@32 -- # uname -s 00:02:50.380 
18:53:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:50.380 18:53:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:50.380 18:53:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:50.380 18:53:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:50.380 18:53:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:50.380 18:53:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:50.380 18:53:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:50.380 18:53:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:50.380 18:53:07 -- spdk/autotest.sh@48 -- # udevadm_pid=2685700 00:02:50.380 18:53:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:50.380 18:53:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:50.380 18:53:07 -- pm/common@17 -- # local monitor 00:02:50.380 18:53:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.381 18:53:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.381 18:53:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.381 18:53:07 -- pm/common@21 -- # date +%s 00:02:50.381 18:53:07 -- pm/common@21 -- # date +%s 00:02:50.381 18:53:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.381 18:53:07 -- pm/common@25 -- # sleep 1 00:02:50.381 18:53:07 -- pm/common@21 -- # date +%s 00:02:50.381 18:53:07 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732643587 00:02:50.381 18:53:07 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732643587 00:02:50.381 18:53:07 -- pm/common@21 -- # date +%s 00:02:50.381 18:53:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732643587 00:02:50.381 18:53:07 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732643587 00:02:50.381 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732643587_collect-vmstat.pm.log 00:02:50.381 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732643587_collect-bmc-pm.bmc.pm.log 00:02:50.381 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732643587_collect-cpu-load.pm.log 00:02:50.381 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732643587_collect-cpu-temp.pm.log 00:02:51.320 18:53:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:51.320 18:53:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:51.320 18:53:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:51.320 18:53:08 -- common/autotest_common.sh@10 -- # set +x 
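[Editor's note] The xtrace above captures a genuine shell error from nvmf/common.sh line 33: the test expands to '[' '' -eq 1 ']', and POSIX test rejects it ("[: : integer expression expected") because -eq requires integer operands and the variable being tested expands to the empty string. A minimal reproduction with two common guards, as a sketch only (the variable name TEST_FLAG is hypothetical; the script tests its own variable, whose name the trace does not show):

#!/usr/bin/env bash
TEST_FLAG=""                      # empty, as in the failing trace

# Fails with "[: : integer expression expected" (exit status 2):
[ "$TEST_FLAG" -eq 1 ] 2>/dev/null || echo "plain [ ]: error or false"

# Guard 1: default the expansion to 0 so -eq always sees an integer.
if [ "${TEST_FLAG:-0}" -eq 1 ]; then echo "flag set"; fi

# Guard 2: compare as strings and avoid integer parsing entirely.
if [[ "$TEST_FLAG" == 1 ]]; then echo "flag set"; fi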
00:02:51.320 18:53:08 -- spdk/autotest.sh@59 -- # create_test_list 00:02:51.320 18:53:08 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:51.320 18:53:08 -- common/autotest_common.sh@10 -- # set +x 00:02:51.320 18:53:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:51.320 18:53:08 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:51.320 18:53:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:51.320 18:53:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:51.320 18:53:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:51.320 18:53:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:51.579 18:53:08 -- common/autotest_common.sh@1457 -- # uname 00:02:51.579 18:53:08 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:51.579 18:53:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:51.579 18:53:08 -- common/autotest_common.sh@1477 -- # uname 00:02:51.579 18:53:08 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:51.579 18:53:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:51.579 18:53:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:02:51.579 lcov: LCOV version 1.15 00:02:51.579 18:53:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:02:56.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:01.130 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:03:02.751 18:53:19 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:02.751 18:53:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:02.751 18:53:19 -- common/autotest_common.sh@10 -- # set +x 00:03:02.751 18:53:19 -- spdk/autotest.sh@78 -- # rm -f 00:03:02.751 18:53:19 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.039 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:06.039 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:06.039 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:06.039 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:06.039 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:06.039 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:06.039 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:06.039 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:06.039 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:06.039 
0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:06.039 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:06.039 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:06.039 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:06.039 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:06.039 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:06.039 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:06.039 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:06.039 18:53:23 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:06.039 18:53:23 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:06.039 18:53:23 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:06.039 18:53:23 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:06.039 18:53:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:06.039 18:53:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:06.039 18:53:23 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:06.039 18:53:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:06.039 18:53:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:06.039 18:53:23 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:06.039 18:53:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:06.039 18:53:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:06.039 18:53:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:06.039 18:53:23 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:06.039 18:53:23 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:06.039 No valid GPT data, bailing 00:03:06.039 18:53:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:06.039 18:53:23 -- scripts/common.sh@394 -- # pt= 00:03:06.039 18:53:23 -- scripts/common.sh@395 -- # return 1 00:03:06.039 18:53:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:06.039 1+0 records in 00:03:06.039 1+0 records out 00:03:06.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481677 s, 218 MB/s 00:03:06.039 18:53:23 -- spdk/autotest.sh@105 -- # sync 00:03:06.039 18:53:23 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:06.039 18:53:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:06.039 18:53:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:11.312 18:53:28 -- spdk/autotest.sh@111 -- # uname -s 00:03:11.312 18:53:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:11.312 18:53:28 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:03:11.312 18:53:28 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:11.312 18:53:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:11.312 18:53:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:11.312 18:53:28 -- common/autotest_common.sh@10 -- # set +x 00:03:11.312 ************************************ 00:03:11.312 START TEST setup.sh 00:03:11.312 ************************************ 00:03:11.312 18:53:28 setup.sh -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:11.312 * Looking for test storage... 
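[Editor's note] The pre-cleanup pass above decides whether each NVMe namespace is zoned by reading its sysfs queue attribute: a device whose /sys/block/<dev>/queue/zoned file reads "none" is conventional, while any other value ("host-aware" or "host-managed") would be collected into zoned_devs and excluded from the wipe. A standalone sketch of that scan, assuming only the sysfs layout the trace itself probes:

for nvme in /sys/block/nvme*; do
    [ -e "$nvme/queue/zoned" ] || continue      # attribute absent on old kernels
    mode=$(<"$nvme/queue/zoned")
    if [ "$mode" != "none" ]; then
        echo "${nvme##*/} is zoned ($mode)"     # such a device would be skipped
    fi
done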
00:03:11.312 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:11.312 18:53:28 setup.sh -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:11.312 18:53:28 setup.sh -- common/autotest_common.sh@1693 -- # lcov --version 00:03:11.312 18:53:28 setup.sh -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:11.571 18:53:28 setup.sh -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@345 -- # : 1 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@353 -- # local d=1 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@355 -- # echo 1 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@353 -- # local d=2 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@355 -- # echo 2 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:11.571 18:53:28 setup.sh -- scripts/common.sh@368 -- # return 0 00:03:11.571 18:53:28 setup.sh -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:11.571 18:53:28 setup.sh -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:11.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.571 --rc genhtml_branch_coverage=1 00:03:11.571 --rc genhtml_function_coverage=1 00:03:11.571 --rc genhtml_legend=1 00:03:11.571 --rc geninfo_all_blocks=1 00:03:11.571 --rc geninfo_unexecuted_blocks=1 00:03:11.571 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:11.571 ' 00:03:11.571 18:53:28 setup.sh -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:11.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.571 --rc genhtml_branch_coverage=1 00:03:11.571 --rc genhtml_function_coverage=1 00:03:11.571 --rc genhtml_legend=1 00:03:11.571 --rc geninfo_all_blocks=1 00:03:11.571 --rc geninfo_unexecuted_blocks=1 
00:03:11.571 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:11.571 ' 00:03:11.571 18:53:28 setup.sh -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:11.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.571 --rc genhtml_branch_coverage=1 00:03:11.571 --rc genhtml_function_coverage=1 00:03:11.571 --rc genhtml_legend=1 00:03:11.571 --rc geninfo_all_blocks=1 00:03:11.571 --rc geninfo_unexecuted_blocks=1 00:03:11.571 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:11.571 ' 00:03:11.571 18:53:28 setup.sh -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:11.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.571 --rc genhtml_branch_coverage=1 00:03:11.571 --rc genhtml_function_coverage=1 00:03:11.571 --rc genhtml_legend=1 00:03:11.571 --rc geninfo_all_blocks=1 00:03:11.571 --rc geninfo_unexecuted_blocks=1 00:03:11.571 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:11.571 ' 00:03:11.571 18:53:28 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:11.571 18:53:28 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:11.571 18:53:28 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:11.571 18:53:28 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:11.571 18:53:28 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:11.571 18:53:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:11.571 ************************************ 00:03:11.571 START TEST acl 00:03:11.571 ************************************ 00:03:11.571 18:53:28 setup.sh.acl -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:11.571 * Looking for test storage... 
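[Editor's note] The lcov version probe repeated throughout this log (cmp_versions 1.15 '<' 2) is a pure-bash comparison: both version strings are split into arrays on '.', '-', and ':' via IFS=.-:, each field is validated as decimal, and the arrays are compared field by field up to the longer length, which is the (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) loop seen in the trace. A condensed, self-contained sketch of the same idea (the function name version_lt is mine, not the script's, and it handles numeric fields only, unlike the script's decimal guard):

version_lt() {                       # usage: version_lt 1.15 2 -> exit 0 if v1 < v2
    local IFS=.-: v
    local -a a=($1) b=($2)           # unquoted expansion splits on IFS
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < n; v++ )); do
        local x=${a[v]:-0} y=${b[v]:-0}
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1                         # equal is not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"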
00:03:11.571 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:11.571 18:53:28 setup.sh.acl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:11.571 18:53:28 setup.sh.acl -- common/autotest_common.sh@1693 -- # lcov --version 00:03:11.571 18:53:28 setup.sh.acl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:11.830 18:53:28 setup.sh.acl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:11.830 18:53:28 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:03:11.830 18:53:28 setup.sh.acl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:11.830 18:53:28 setup.sh.acl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:11.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.830 --rc genhtml_branch_coverage=1 00:03:11.830 --rc genhtml_function_coverage=1 00:03:11.830 --rc genhtml_legend=1 00:03:11.831 --rc geninfo_all_blocks=1 00:03:11.831 --rc geninfo_unexecuted_blocks=1 00:03:11.831 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:11.831 ' 00:03:11.831 18:53:28 setup.sh.acl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:11.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.831 --rc genhtml_branch_coverage=1 00:03:11.831 --rc 
genhtml_function_coverage=1 00:03:11.831 --rc genhtml_legend=1 00:03:11.831 --rc geninfo_all_blocks=1 00:03:11.831 --rc geninfo_unexecuted_blocks=1 00:03:11.831 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:11.831 ' 00:03:11.831 18:53:28 setup.sh.acl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:11.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.831 --rc genhtml_branch_coverage=1 00:03:11.831 --rc genhtml_function_coverage=1 00:03:11.831 --rc genhtml_legend=1 00:03:11.831 --rc geninfo_all_blocks=1 00:03:11.831 --rc geninfo_unexecuted_blocks=1 00:03:11.831 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:11.831 ' 00:03:11.831 18:53:28 setup.sh.acl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:11.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.831 --rc genhtml_branch_coverage=1 00:03:11.831 --rc genhtml_function_coverage=1 00:03:11.831 --rc genhtml_legend=1 00:03:11.831 --rc geninfo_all_blocks=1 00:03:11.831 --rc geninfo_unexecuted_blocks=1 00:03:11.831 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:11.831 ' 00:03:11.831 18:53:28 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:11.831 18:53:28 setup.sh.acl -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:11.831 18:53:28 setup.sh.acl -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:11.831 18:53:28 setup.sh.acl -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:11.831 18:53:28 setup.sh.acl -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:11.831 18:53:28 setup.sh.acl -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:11.831 18:53:28 setup.sh.acl -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:11.831 18:53:28 setup.sh.acl -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:11.831 18:53:28 setup.sh.acl -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:11.831 18:53:28 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:11.831 18:53:28 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:11.831 18:53:28 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:11.831 18:53:28 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:11.831 18:53:28 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:11.831 18:53:28 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.831 18:53:28 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:16.024 18:53:32 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:16.024 18:53:32 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:16.025 18:53:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.025 18:53:32 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:16.025 18:53:32 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.025 18:53:32 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:18.562 Hugepages 00:03:18.562 node hugesize free / total 00:03:18.562 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:18.562 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:18.562 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.562 18:53:35 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:18.562 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:18.562 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.562 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:18.562 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:18.562 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.562 00:03:18.562 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:18.562 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:18.562 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:18.562 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.562 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:18.562 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 
00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:18.563 18:53:35 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:18.563 18:53:35 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:18.563 18:53:35 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:18.563 18:53:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:18.823 ************************************ 00:03:18.823 START TEST denied 00:03:18.823 ************************************ 00:03:18.823 18:53:35 setup.sh.acl.denied -- 
common/autotest_common.sh@1129 -- # denied 00:03:18.823 18:53:35 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:18.823 18:53:35 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:18.823 18:53:35 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:18.823 18:53:35 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.823 18:53:35 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:22.127 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:22.127 18:53:38 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:22.127 18:53:38 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:22.127 18:53:38 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:22.127 18:53:38 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:22.127 18:53:38 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:22.127 18:53:38 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:22.127 18:53:38 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:22.127 18:53:38 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:22.127 18:53:38 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:22.127 18:53:38 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.320 00:03:26.320 real 0m7.458s 00:03:26.320 user 0m2.261s 00:03:26.320 sys 0m4.447s 00:03:26.320 18:53:43 setup.sh.acl.denied -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.320 18:53:43 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:26.320 ************************************ 00:03:26.320 END TEST denied 00:03:26.320 ************************************ 00:03:26.320 18:53:43 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:26.320 18:53:43 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.320 18:53:43 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.320 18:53:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:26.320 ************************************ 00:03:26.320 START TEST allowed 00:03:26.320 ************************************ 00:03:26.320 18:53:43 setup.sh.acl.allowed -- common/autotest_common.sh@1129 -- # allowed 00:03:26.320 18:53:43 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:26.320 18:53:43 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:26.320 18:53:43 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:26.320 18:53:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.320 18:53:43 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:32.883 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:32.883 18:53:49 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:32.883 18:53:49 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:32.883 18:53:49 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:32.883 18:53:49 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.883 18:53:49 setup.sh.acl.allowed 
-- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.171 00:03:36.171 real 0m9.654s 00:03:36.171 user 0m2.215s 00:03:36.171 sys 0m4.258s 00:03:36.171 18:53:52 setup.sh.acl.allowed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:36.171 18:53:52 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:36.171 ************************************ 00:03:36.171 END TEST allowed 00:03:36.171 ************************************ 00:03:36.171 00:03:36.171 real 0m24.379s 00:03:36.171 user 0m7.180s 00:03:36.171 sys 0m13.543s 00:03:36.171 18:53:53 setup.sh.acl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:36.171 18:53:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:36.171 ************************************ 00:03:36.171 END TEST acl 00:03:36.171 ************************************ 00:03:36.171 18:53:53 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:36.171 18:53:53 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:36.171 18:53:53 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:36.171 18:53:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:36.171 ************************************ 00:03:36.171 START TEST hugepages 00:03:36.171 ************************************ 00:03:36.171 18:53:53 setup.sh.hugepages -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:36.171 * Looking for test storage... 00:03:36.171 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:36.171 18:53:53 setup.sh.hugepages -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:36.171 18:53:53 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # lcov --version 00:03:36.171 18:53:53 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:36.171 18:53:53 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:36.172 18:53:53 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:03:36.172 18:53:53 setup.sh.hugepages -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:36.172 18:53:53 setup.sh.hugepages -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:36.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.172 --rc genhtml_branch_coverage=1 00:03:36.172 --rc genhtml_function_coverage=1 00:03:36.172 --rc genhtml_legend=1 00:03:36.172 --rc geninfo_all_blocks=1 00:03:36.172 --rc geninfo_unexecuted_blocks=1 00:03:36.172 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:36.172 ' 00:03:36.172 18:53:53 setup.sh.hugepages -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:36.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.172 --rc genhtml_branch_coverage=1 00:03:36.172 --rc genhtml_function_coverage=1 00:03:36.172 --rc genhtml_legend=1 00:03:36.172 --rc geninfo_all_blocks=1 00:03:36.172 --rc geninfo_unexecuted_blocks=1 00:03:36.172 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:36.172 ' 00:03:36.172 18:53:53 setup.sh.hugepages -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:36.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.172 --rc genhtml_branch_coverage=1 00:03:36.172 --rc genhtml_function_coverage=1 00:03:36.172 --rc genhtml_legend=1 00:03:36.172 --rc geninfo_all_blocks=1 00:03:36.172 --rc geninfo_unexecuted_blocks=1 00:03:36.172 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:36.172 ' 00:03:36.172 18:53:53 setup.sh.hugepages -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:36.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.172 --rc genhtml_branch_coverage=1 00:03:36.172 --rc genhtml_function_coverage=1 00:03:36.172 --rc genhtml_legend=1 00:03:36.172 --rc geninfo_all_blocks=1 00:03:36.172 --rc geninfo_unexecuted_blocks=1 00:03:36.172 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:36.172 ' 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:36.172 18:53:53 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 72414540 kB' 'MemAvailable: 77304440 kB' 'Buffers: 10116 kB' 'Cached: 12889620 kB' 'SwapCached: 0 kB' 'Active: 9514972 kB' 'Inactive: 4039056 kB' 'Active(anon): 8321988 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 657600 kB' 'Mapped: 175184 kB' 'Shmem: 7667696 kB' 'KReclaimable: 493796 kB' 'Slab: 1004984 kB' 'SReclaimable: 493796 kB' 'SUnreclaim: 511188 kB' 'KernelStack: 16048 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434204 kB' 'Committed_AS: 9653772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199908 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB' 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.172 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 18:53:53 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 18:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _
[xtrace condensed: the setup/common.sh@31-32 read/continue loop skips the remaining /proc/meminfo keys (KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp); none matches Hugepagesize]
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
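For orientation, the loop traced above can be read as the following minimal sketch of get_meminfo, reconstructed only from the xtrace output (the real setup/common.sh uses mapfile plus a Node-prefix strip and may differ in detail):

    # Sketch: scan a meminfo-style file for one key and print its value.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        # Per-node lookups read the node's own meminfo instead (path as in the trace).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            # Matching key found: print its numeric value (the unit lands in _) and stop.
            [[ $var == "$get" ]] && echo "$val" && return 0
        done <"$mem_f"
        return 1
    }

Called as get_meminfo Hugepagesize, it prints 2048 here, which is what hugepages.sh then stores as default_hugepages.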
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
00:03:36.174 18:53:53 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup
00:03:36.174 18:53:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:36.174 18:53:53 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:36.174 18:53:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:36.433 ************************************
00:03:36.434 START TEST single_node_setup
00:03:36.434 ************************************
00:03:36.434 18:53:53 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1129 -- # single_node_setup
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152
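The clear_hp sequence above pairs each echo 0 with a redirect that the xtrace does not show; assuming the write goes to each pool's nr_hugepages file (consistent with the sysfs paths in the traced for-loops), the equivalent is:

    # Zero every hugepage pool on every NUMA node before the test sets its own counts.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # assumed target of the traced 'echo 0'
        done
    done

With two nodes and two pool sizes per node, that accounts for the four echo 0 calls seen in the trace.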
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0')
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0')
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=()
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0
18:53:53 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output
18:53:53 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]]
18:53:53 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:39.723 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:39.723 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:39.723 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:39.723 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:39.723 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:39.723 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:39.723 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:39.723 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:39.723 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:39.723 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:39.723 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:39.723 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:39.723 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:39.723 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:39.723 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:39.723 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:43.017 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
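Here NRHUGE=1024 HUGENODE=0 setup output hands control to scripts/setup.sh, which reserves the pages and rebinds the ioatdma and nvme devices listed above to vfio-pci. The hugepage part is roughly equivalent to this manual write (path assumed from the 2048 kB default captured earlier in the trace):

    # Reserve 1024 x 2 MiB pages on NUMA node 0, as NRHUGE/HUGENODE request.
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    grep -E 'HugePages_(Total|Free)' /proc/meminfo   # both report 1024 in the dump below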
18:53:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:03:43.017 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74663464 kB' 'MemAvailable: 79553068 kB' 'Buffers: 10116 kB' 'Cached: 12889756 kB' 'SwapCached: 0 kB' 'Active: 9518056 kB' 'Inactive: 4039056 kB' 'Active(anon): 8325072 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660476 kB' 'Mapped: 175176 kB' 'Shmem: 7667832 kB' 'KReclaimable: 493500 kB' 'Slab: 1004588 kB' 'SReclaimable: 493500 kB' 'SUnreclaim: 511088 kB' 'KernelStack: 16192 kB' 'PageTables: 9072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9657968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200052 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
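A quick cross-check of the dump above: with Hugepagesize at 2048 kB, the 1024 reserved pages account for exactly the Hugetlb figure and the size passed to get_test_nr_hugepages earlier (all values in kB, as in /proc/meminfo):

    echo $((1024 * 2048))   # 2097152 kB, i.e. 2 GiB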
[xtrace condensed: the setup/common.sh@31-32 read/continue loop skips every /proc/meminfo key from MemTotal through HardwareCorrupted; none matches AnonHugePages]
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0
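With anon collected, verify_nr_hugepages goes on to gather the surplus and reserved counters the same way. A condensed sketch of this stretch of the trace, using the get_meminfo sketch from earlier; the final consistency check is an assumption, since this excerpt ends before any comparison is logged:

    anon=$(get_meminfo AnonHugePages)    # 0 in this run
    surp=$(get_meminfo HugePages_Surp)   # 0
    resv=$(get_meminfo HugePages_Rsvd)   # 0
    nr=$(get_meminfo HugePages_Total)    # 1024 per the dumps
    # Assumed check: the pool minus surplus should equal the 1024 pages requested.
    (( nr - surp == 1024 )) || echo "unexpected nr_hugepages: $nr" >&2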
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:03:43.018 18:53:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74663236 kB' 'MemAvailable: 79552840 kB' 'Buffers: 10116 kB' 'Cached: 12889760 kB' 'SwapCached: 0 kB' 'Active: 9517788 kB' 'Inactive: 4039056 kB' 'Active(anon): 8324804 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660236 kB' 'Mapped: 175080 kB' 'Shmem: 7667836 kB' 'KReclaimable: 493500 kB' 'Slab: 1004804 kB' 'SReclaimable: 493500 kB' 'SUnreclaim: 511304 kB' 'KernelStack: 16272 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9657988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200036 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
[xtrace condensed: the setup/common.sh@31-32 read/continue loop skips every /proc/meminfo key from MemTotal through HugePages_Rsvd; none matches HugePages_Surp]
00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0
00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
get=HugePages_Rsvd 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74677600 kB' 'MemAvailable: 79567204 kB' 'Buffers: 10116 kB' 'Cached: 12889776 kB' 'SwapCached: 0 kB' 'Active: 9518780 kB' 'Inactive: 4039056 kB' 'Active(anon): 8325796 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661404 kB' 'Mapped: 175064 kB' 'Shmem: 7667852 kB' 'KReclaimable: 493500 kB' 'Slab: 1004772 kB' 'SReclaimable: 493500 kB' 'SUnreclaim: 511272 kB' 'KernelStack: 16176 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9659084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200084 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.019 18:54:00 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.019 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:43.020 nr_hugepages=1024 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:43.020 resv_hugepages=0 00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:43.020 surplus_hugepages=0 00:03:43.020 18:54:00 
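For reference, the get_meminfo() helper producing all of the trace above reduces to the shape below. This is a minimal sketch reconstructed from the xtrace entries (common.sh@16-33), not a verbatim copy of test/setup/common.sh; in particular the early return on the [[ -n $node ]] branch is an assumption.

shopt -s extglob # the +([0-9]) patterns below are extended globs

get_meminfo() {
    local get=$1
    local node=$2
    local var val _
    local mem_f mem

    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        # A NUMA node id was passed in: read the per-node view instead.
        mem_f=/sys/devices/system/node/node$node/meminfo
    elif [[ -n $node ]]; then
        return 1 # assumed: a node was requested but exposes no meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <id> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # Walk the keys one by one until the requested one turns up.
    while IFS=': ' read -r var val _; do
        [[ $var == $get ]] || continue # the "continue" runs seen in the trace
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

So get_meminfo HugePages_Rsvd scans every key of /proc/meminfo until it reaches HugePages_Rsvd; that scan is exactly the long run of continue entries collapsed above, and the escaped \H\u\g\e... patterns are simply how xtrace renders the unquoted right-hand side of [[ $var == $get ]].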
00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.020 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74678252 kB' 'MemAvailable: 79567856 kB' 'Buffers: 10116 kB' 'Cached: 12889776 kB' 'SwapCached: 0 kB' 'Active: 9521828 kB' 'Inactive: 4039056 kB' 'Active(anon): 8328844 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664676 kB' 'Mapped: 175064 kB' 'Shmem: 7667852 kB' 'KReclaimable: 493500 kB' 'Slab: 1004740 kB' 'SReclaimable: 493500 kB' 'SUnreclaim: 511240 kB' 'KernelStack: 16096 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9663508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200052 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
[xtrace collapsed: setup/common.sh@31-32 scans each key against HugePages_Total, continuing past every non-match from MemTotal through Unaccepted]
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
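The lookups above feed the accounting check at hugepages.sh@106-109: the page count the test configured must equal what the kernel reports once surplus and reserved pages are added in. A hedged condensation of that logic, with variable names taken from the trace and the surrounding control flow simplified:

nr_hugepages=1024                  # requested by the test
surp=$(get_meminfo HugePages_Surp) # 0 in this run
resv=$(get_meminfo HugePages_Rsvd) # 0 in this run

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# hugepages.sh@106/@109 as traced, post-expansion:
# 1024 == 1024 + 0 + 0, so both checks pass in this run.
(( 1024 == nr_hugepages + surp + resv ))
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))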
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
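get_nodes (hugepages.sh@26-32 above) records how many huge pages each NUMA node currently holds; on this machine node0 carries all 1024 pages and node1 none. A sketch of that bookkeeping follows. The xtrace only shows the already-expanded assignments, so how the per-node counts are fetched here (via the node's own HugePages_Total) is an assumption:

shopt -s extglob # for the node+([0-9]) glob
declare -a nodes_sys

get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # Key by numeric node id; value = huge pages on that node
        # (assumed to come from the per-node HugePages_Total).
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) # the setup requires at least one populated node
}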
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.021 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 34180276 kB' 'MemUsed: 13884660 kB' 'SwapCached: 0 kB' 'Active: 7283776 kB' 'Inactive: 3549192 kB' 'Active(anon): 6432664 kB' 'Inactive(anon): 0 kB' 'Active(file): 851112 kB' 'Inactive(file): 3549192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10604320 kB' 'Mapped: 132540 kB' 'AnonPages: 232084 kB' 'Shmem: 6204016 kB' 'KernelStack: 7144 kB' 'PageTables: 5092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183112 kB' 'Slab: 454416 kB' 'SReclaimable: 183112 kB' 'SUnreclaim: 271304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace collapsed: setup/common.sh@31-32 scans the node0 keys from MemTotal through HugePages_Total against HugePages_Surp, continuing past every non-match]
00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read
-r var val _ 00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:43.022 node0=1024 expecting 1024 00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:43.022 00:03:43.022 real 0m6.715s 00:03:43.022 user 0m1.405s 00:03:43.022 sys 0m2.235s 00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:43.022 18:54:00 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:03:43.022 ************************************ 00:03:43.022 END TEST single_node_setup 00:03:43.022 ************************************ 00:03:43.022 18:54:00 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:03:43.022 18:54:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:43.022 18:54:00 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:43.022 18:54:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.022 ************************************ 00:03:43.022 START TEST even_2G_alloc 00:03:43.022 ************************************ 00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1129 -- # even_2G_alloc 00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local 
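For readers skimming the trace: every get_meminfo call in this log is common.sh scanning /proc/meminfo (or a per-node meminfo file) one "key: value" pair at a time and echoing the value once the requested key matches, which is why each non-matching key shows up as a separate "continue" in the xtrace. A minimal stand-alone sketch of that pattern (hypothetical helper name, not the verbatim SPDK function):

    #!/usr/bin/env bash
    # get_meminfo_sketch KEY -- print the value recorded for KEY in /proc/meminfo
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # the xtrace above shows one "continue" per key that is not the requested one
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on this runner, matching the trace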
00:03:43.022 18:54:00 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc
00:03:43.022 18:54:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:43.022 18:54:00 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:43.022 18:54:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:43.022 ************************************
00:03:43.022 START TEST even_2G_alloc
00:03:43.022 ************************************
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1129 -- # even_2G_alloc
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:43.022 18:54:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:46.311 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:46.311 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:46.311 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:46.311 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:46.312 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:46.312 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:46.312 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:46.312 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:46.312 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:46.312 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:46.312 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:46.312 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:46.312 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:46.312 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:46.312 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:46.312 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:46.312 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
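The hugepages.sh@80-@83 iterations above are the even split that gives even_2G_alloc its name: the 1024 requested 2048 kB pages are divided equally across the runner's two NUMA nodes before setup.sh applies NRHUGE=1024. A condensed sketch of the same arithmetic (an assumption-level reconstruction of the countdown loop visible in the trace, not the verbatim hugepages.sh source):

    # split the requested page count evenly across NUMA nodes, highest index first
    _nr_hugepages=1024
    _no_nodes=2
    per_node=$(( _nr_hugepages / _no_nodes ))   # 1024 / 2 = 512
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$per_node
        (( _no_nodes-- ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512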
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74695868 kB' 'MemAvailable: 79585464 kB' 'Buffers: 10116 kB' 'Cached: 12889916 kB' 'SwapCached: 0 kB' 'Active: 9524268 kB' 'Inactive: 4039056 kB' 'Active(anon): 8331284 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666536 kB' 'Mapped: 175868 kB' 'Shmem: 7667992 kB' 'KReclaimable: 493492 kB' 'Slab: 1004480 kB' 'SReclaimable: 493492 kB' 'SUnreclaim: 510988 kB' 'KernelStack: 16144 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9665516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200120 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.312 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.313 [log condensed: the get_meminfo xtrace loop compares each snapshot key against AnonHugePages and takes "continue" for every non-match, from MemTotal through HardwareCorrupted]
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.313 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74696440 kB' 'MemAvailable: 79586036 kB' 'Buffers: 10116 kB' 'Cached: 12889920 kB' 'SwapCached: 0 kB' 'Active: 9524500 kB' 'Inactive: 4039056 kB' 'Active(anon): 8331516 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666772 kB' 'Mapped: 175868 kB' 'Shmem: 7667996 kB' 'KReclaimable: 493492 kB' 'Slab: 1004456 kB' 'SReclaimable: 493492 kB' 'SUnreclaim: 510964 kB' 'KernelStack: 16112 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9665536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200088 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
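Two details of verify_nr_hugepages are worth spelling out. The @95 test only bothers reading AnonHugePages because transparent hugepages are not fully disabled on this runner ("always [madvise] never"), so THP-backed memory could otherwise be confused with the reserved pool; anon comes back 0. And the mapfile/printf pair snapshots the whole meminfo file into an array, with the "${mem[@]#Node +([0-9]) }" extglob expansion stripping the "Node N " prefix that per-node meminfo files carry, so node files and /proc/meminfo parse identically. A rough sketch of the accounting the function appears to be building toward, reusing get_meminfo_sketch from above (the final check is an illustration of intent, not a quote of hugepages.sh):

    anon=$(get_meminfo_sketch AnonHugePages)     # 0 kB -> no THP noise in the counters
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 surplus pages
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 reserved-but-unfaulted pages
    total=$(get_meminfo_sketch HugePages_Total)  # 1024
    # with no surplus pages, the allocated pool should equal the request
    (( total - surp == 1024 )) && echo "hugepage pool matches the 1024-page request"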
00:03:46.314 [log condensed: the get_meminfo xtrace loop compares each snapshot key against HugePages_Surp and takes "continue" for every non-match, from MemTotal through HugePages_Free and HugePages_Rsvd]
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74696888 kB' 'MemAvailable: 79586484 kB' 'Buffers: 10116 kB' 'Cached: 12889920 kB' 'SwapCached: 0 kB' 'Active: 9524440 kB' 'Inactive: 4039056 kB' 'Active(anon): 8331456 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666680 kB' 'Mapped: 175868 kB' 'Shmem: 7667996 kB' 'KReclaimable: 493492 kB' 'Slab: 1004516 kB' 'SReclaimable: 493492 kB' 'SUnreclaim: 511024 kB' 'KernelStack: 16112 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9665556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200088 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
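The snapshots also make the test's sizing easy to verify by hand: get_test_nr_hugepages was called with 2097152, which the trace resolves to nr_hugepages=1024, consistent with a size expressed in kB (2 GiB) divided by the 2048 kB page size. A quick sanity block with values copied from the snapshot above:

    size_kb=2097152          # requested pool: 2 GiB expressed in kB
    hugepagesize_kb=2048     # Hugepagesize from the snapshot
    echo $(( size_kb / hugepagesize_kb ))   # 1024 -> matches HugePages_Total
    echo $(( 1024 * hugepagesize_kb ))      # 2097152 -> matches Hugetlb (kB)
    echo $(( 1024 / 2 ))                    # 512 pages expected per node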
'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB' 00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.315 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- 
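For readers following the trace: the lines above are the body of the get_meminfo helper in SPDK's test/setup/common.sh, parsing the printf'd meminfo dump one "field: value" pair at a time. The following is a minimal reconstruction of that pattern, not the actual implementation; the sed-based "Node N " stripping stands in for the extglob expansion visible in the trace.

get_meminfo() {
    local get=$1 node=$2 var val
    local mem_f=/proc/meminfo
    # With an empty $2 this probes ".../node/node/meminfo", which never
    # exists, so system-wide queries fall back to /proc/meminfo -- exactly
    # the [[ -e ... ]] check seen in the trace.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    # IFS=': ' splits on both colon and space, so val is already the bare
    # number and the trailing "kB" lands in the throwaway "_" field.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # the long run of continues above
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
}

With the dump above, get_meminfo HugePages_Rsvd walks every field until HugePages_Rsvd matches and echoes 0, which hugepages.sh captures as resv.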
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:03:46.317 nr_hugepages=1024
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:46.317 resv_hugepages=0
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:46.317 surplus_hugepages=0
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:46.317 anon_hugepages=0
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.317 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74696888 kB' 'MemAvailable: 79586484 kB' 'Buffers: 10116 kB' 'Cached: 12889920 kB' 'SwapCached: 0 kB' 'Active: 9524588 kB' 'Inactive: 4039056 kB' 'Active(anon): 8331604 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666828 kB' 'Mapped: 175868 kB' 'Shmem: 7667996 kB' 'KReclaimable: 493492 kB' 'Slab: 1004516 kB' 'SReclaimable: 493492 kB' 'SUnreclaim: 511024 kB' 'KernelStack: 16112 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9665724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200104 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
00:03:46.318 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [xtrace condensed: the read loop continues past every field until HugePages_Total matches]
00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node
00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
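At this point the test has checked the hugetlb accounting identity HugePages_Total == nr_hugepages + surplus + reserved (here 1024 == 1024 + 0 + 0), and get_nodes has recorded an expected 512 pages on each of the two NUMA nodes. A hedged sketch of that even-split check follows; check_even_split is a hypothetical helper, not part of hugepages.sh, and it reads the standard per-node sysfs counters instead of the script's nodes_sys/nodes_test arrays.

check_even_split() {
    # Verify the kernel split the global 2048kB pool evenly across nodes.
    local expected_per_node=$1 node count total=0
    for node in /sys/devices/system/node/node[0-9]*; do
        count=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
        (( count == expected_per_node )) || return 1
        (( total += count ))
    done
    echo "total=$total"
}

check_even_split 512   # on this two-node machine: prints total=1024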
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 35256164 kB' 'MemUsed: 12808772 kB' 'SwapCached: 0 kB' 'Active: 7288296 kB' 'Inactive: 3549192 kB' 'Active(anon): 6437184 kB' 'Inactive(anon): 0 kB' 'Active(file): 851112 kB' 'Inactive(file): 3549192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10604472 kB' 'Mapped: 132556 kB' 'AnonPages: 236200 kB' 'Shmem: 6204168 kB' 'KernelStack: 6904 kB' 'PageTables: 4652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183104 kB' 'Slab: 454564 kB' 'SReclaimable: 183104 kB' 'SUnreclaim: 271460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.319 
18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.320 18:54:03 setup.sh.hugepages.even_2G_alloc -- 
[... xtrace elided: setup/common.sh@32 tests each remaining field of the node0 meminfo dump against HugePages_Surp and issues 'continue' for every non-match (Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) ...]
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
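For readers decoding the xtrace above: all of it is produced by a small meminfo parser in setup/common.sh that reads one 'key: value' pair per loop iteration and echoes the value once the requested key matches. Below is a minimal standalone sketch of that pattern; it is an assumed simplification, not the literal SPDK helper (the real one uses mapfile plus the "${mem[@]#Node +([0-9]) }" expansion visible in the trace).

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern driving the xtrace above (assumption:
    # simplified rewrite of setup/common.sh, same observable behavior).
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # A node argument switches the source to that node's sysfs meminfo copy.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "}        # sysfs lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then     # e.g. HugePages_Surp
                echo "$val"
                return 0
            fi
        done <"$mem_f"
        return 1
    }
    get_meminfo HugePages_Surp 0   # would print 0 here, matching echo 0 / return 0

Each 'continue' record in the trace is simply this loop skipping a non-matching key, which is why the log repeats the same three records once per meminfo field.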
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220568 kB' 'MemFree: 39439968 kB' 'MemUsed: 4780600 kB' 'SwapCached: 0 kB' 'Active: 2236152 kB' 'Inactive: 489864 kB' 'Active(anon): 1894280 kB' 'Inactive(anon): 0 kB' 'Active(file): 341872 kB' 'Inactive(file): 489864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2295636 kB' 'Mapped: 43312 kB' 'AnonPages: 430428 kB' 'Shmem: 1463900 kB' 'KernelStack: 9192 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 310388 kB' 'Slab: 549952 kB' 'SReclaimable: 310388 kB' 'SUnreclaim: 239564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: setup/common.sh@32 scans the node1 dump field by field for HugePages_Surp, issuing 'continue' for every non-matching key from MemTotal through HugePages_Free ...]
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.321 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
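The hugepages.sh@114-116 records bracketing the two get_meminfo calls implement a small per-node adjustment: each node's expected page count is bumped by the reserved and surplus pages the kernel reports. A hedged reconstruction of that loop, reusing the get_meminfo sketch above (resv is assumed to hold a previously computed reserved-page count; it is not shown being set in this excerpt):

    # Reconstruction of the loop traced at setup/hugepages.sh@114-116.
    nodes_test=(512 512)   # expected hugepages per NUMA node in this test
    resv=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                   # reserved pages
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # surplus, 0 here
    done
    echo "${nodes_test[@]}"   # 512 512, unchanged, matching the "(( nodes_test[node] += 0 ))" records

Both adjustments are zero on this machine, which is why the trace shows literal "+= 0" arithmetic on both nodes.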
00:03:46.322 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:46.322 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:46.322 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:46.322 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:46.322 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:03:46.322 node0=512 expecting 512
00:03:46.322 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:46.322 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:46.322 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:46.322 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512'
00:03:46.322 node1=512 expecting 512
00:03:46.322 18:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]]
00:03:46.322 
00:03:46.322 real	0m3.202s
00:03:46.322 user	0m1.231s
00:03:46.322 sys	0m2.016s
00:03:46.322 18:54:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:46.322 18:54:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:46.322 ************************************
00:03:46.322 END TEST even_2G_alloc
00:03:46.322 ************************************
00:03:46.322 18:54:03 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc
00:03:46.322 18:54:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:46.322 18:54:03 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:46.322 18:54:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:46.322 ************************************
00:03:46.322 START TEST odd_alloc
00:03:46.322 ************************************
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1129 -- # odd_alloc
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
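The odd_alloc prologue above shows get_test_nr_hugepages turning a 2098176 kB request into nr_hugepages=1025. With the 2048 kB Hugepagesize reported in the meminfo dumps below, that is a round-up division; the Hugetlb figure in those dumps (2099200 kB = 1025 * 2048 kB) corroborates it. A worked sketch of the arithmetic (variable names hypothetical):

    # 2098176 kB requested at 2048 kB per hugepage -> 1024.5 pages, rounded up.
    size_kb=2098176
    hugepage_kb=2048
    echo $(( (size_kb + hugepage_kb - 1) / hugepage_kb ))   # prints 1025

Requesting an odd page count is the point of this test: 1025 pages cannot be split evenly across two NUMA nodes.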
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0
00:03:46.322 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:46.323 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:46.323 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049
00:03:46.323 18:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output
00:03:46.323 18:54:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:46.323 18:54:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:49.614 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:49.614 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:49.614 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:49.614 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:49.614 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:49.614 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:49.614 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:49.614 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:49.614 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:49.614 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:49.614 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:49.614 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:49.614 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:49.614 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:49.614 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:49.614 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:49.614 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:49.614 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages
00:03:49.614 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
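The @80-@83 records above distribute those 1025 pages across the two NUMA nodes, highest index first: node 1 gets 512 and node 0 gets the 513-page remainder. A sketch consistent with the values echoed in the trace; this is a reconstruction from the visible assignments, not the literal hugepages.sh source:

    # Split _nr_hugepages across _no_nodes, one node per iteration; the odd
    # remainder lands on node 0 (512 assigned to node 1, then 513 to node 0).
    _nr_hugepages=1025 _no_nodes=2
    nodes_test=()
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        _nr_hugepages=$(( _nr_hugepages - nodes_test[_no_nodes - 1] ))
        (( _no_nodes-- ))
    done
    echo "${nodes_test[@]}"   # 513 512

The ": 513" and ": 1" records would then be the remaining page count and next node index passing through no-op ":" commands as the loop winds down.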
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.615 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74729380 kB' 'MemAvailable: 79619008 kB' 'Buffers: 10116 kB' 'Cached: 12890228 kB' 'SwapCached: 0 kB' 'Active: 9517824 kB' 'Inactive: 4039056 kB' 'Active(anon): 8324840 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659848 kB' 'Mapped: 174288 kB' 'Shmem: 7668304 kB' 'KReclaimable: 493524 kB' 'Slab: 1003820 kB' 'SReclaimable: 493524 kB' 'SUnreclaim: 510296 kB' 'KernelStack: 15984 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481756 kB' 'Committed_AS: 9649824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200004 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
[... xtrace elided: setup/common.sh@32 scans the dump for AnonHugePages, issuing 'continue' for every other field from MemTotal through HardwareCorrupted ...]
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.655 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74729632 kB' 'MemAvailable: 79619260 kB' 'Buffers: 10116 kB' 'Cached: 12890232 kB' 'SwapCached: 0 kB' 'Active: 9517508 kB' 'Inactive: 4039056 kB' 'Active(anon): 8324524 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659476 kB' 'Mapped: 174288 kB' 'Shmem: 7668308 kB' 'KReclaimable: 493524 kB' 'Slab: 1003820 kB' 'SReclaimable: 493524 kB' 'SUnreclaim: 510296 kB' 'KernelStack: 15968 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481756 kB' 'Committed_AS: 9649840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199972 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
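The anon check traced at hugepages.sh@95-96 earlier only samples AnonHugePages after confirming transparent hugepages are not disabled: the odd-looking [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test matches against the bracketed mode in the THP sysfs knob. A hedged sketch of that guard, reusing the get_meminfo sketch from earlier:

    # The kernel brackets the active THP mode, e.g. "always [madvise] never".
    # Only when the mode is not [never] could anonymous THP inflate the count.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    fi

Here the mode is [madvise], so the guard passes and the lookup returns 0, which is the anon=0 record above; the follow-up get_meminfo HugePages_Surp call then begins the system-wide surplus check whose scan follows.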
[... xtrace elided: setup/common.sh@32 scans this second dump field by field for HugePages_Surp, issuing 'continue' for each non-matching key (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, ...) ...]
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.656 
18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.656 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
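The condensed scans above and below all come from the same helper, get_meminfo in setup/common.sh. The xtrace gives away enough to re-create it; the following is a minimal sketch assuming only what is visible in this trace (the real SPDK helper may differ in details):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    # Look up one key in /proc/meminfo, or in a per-NUMA-node meminfo file
    # when a node index is passed. Mirrors the trace: mapfile, prefix strip,
    # then a read loop that "continue"s past every key until the requested
    # one matches, echoing only the numeric value column.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f mem
        mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"   # unit column ("kB"), if any, is discarded
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp      # prints 0 on the box in this log
    get_meminfo HugePages_Total 0   # prints 513 for NUMA node 0 here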
00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.657 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74733300 kB' 'MemAvailable: 79622928 kB' 'Buffers: 10116 kB' 'Cached: 12890244 kB' 'SwapCached: 0 kB' 'Active: 9517992 kB' 'Inactive: 4039056 kB' 'Active(anon): 8325008 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659924 kB' 'Mapped: 174180 kB' 'Shmem: 7668320 kB' 'KReclaimable: 493524 kB' 'Slab: 1003804 kB' 'SReclaimable: 493524 kB' 'SUnreclaim: 510280 kB' 'KernelStack: 15968 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481756 kB' 'Committed_AS: 9649860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199956 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
[xtrace condensed: the same @31-32 read/continue cycle skips every /proc/meminfo key from MemTotal through HugePages_Free before HugePages_Rsvd matches]
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025
00:03:49.659 nr_hugepages=1025
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:49.659 resv_hugepages=0
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:49.659 surplus_hugepages=0
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:49.659 anon_hugepages=0
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages ))
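At this point hugepages.sh has collected all three counters and asserts the identity the odd_alloc test cares about: the kernel honored the odd pool size exactly, with no surplus or reserved pages. A hedged restatement of that check in plain bash (get_meminfo as sketched earlier; the values in comments are the ones visible in this run):

    nr_hugepages=1025                       # requested (odd) pool size
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 1025 in this run
    # Consistent only if every page in the pool is accounted for:
    (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages ))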
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.659 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74733300 kB' 'MemAvailable: 79622928 kB' 'Buffers: 10116 kB' 'Cached: 12890244 kB' 'SwapCached: 0 kB' 'Active: 9517488 kB' 'Inactive: 4039056 kB' 'Active(anon): 8324504 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659420 kB' 'Mapped: 174180 kB' 'Shmem: 7668320 kB' 'KReclaimable: 493524 kB' 'Slab: 1003804 kB' 'SReclaimable: 493524 kB' 'SUnreclaim: 510280 kB' 'KernelStack: 15968 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481756 kB' 'Committed_AS: 9649884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199956 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
[xtrace condensed: the @31-32 read/continue cycle again skips every /proc/meminfo key from MemTotal through Unaccepted before HugePages_Total matches]
00:03:49.660 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.660 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:49.660 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:49.660 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:49.660 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:49.660 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node
00:03:49.660 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:49.660 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513
00:03:49.660 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:49.660 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:03:49.660 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
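get_nodes just read 513 and 512 hugepages back from the two NUMA nodes' meminfo files, which is how an odd total of 1025 is expected to land when the kernel spreads the pool across nodes (node0 absorbs the remainder, as this log shows). An illustrative computation of that split, assuming an even round-robin distribution over no_nodes nodes:

    nr_hugepages=1025
    no_nodes=2
    for (( node = 0; node < no_nodes; node++ )); do
        # Even share, plus one extra page for the first (nr % no_nodes) nodes.
        echo "node$node: $(( nr_hugepages / no_nodes + (node < nr_hugepages % no_nodes ? 1 : 0) ))"
    done
    # Prints node0: 513 and node1: 512 -- matching nodes_sys above.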
setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 35258156 kB' 'MemUsed: 12806780 kB' 'SwapCached: 0 kB' 'Active: 7283696 kB' 'Inactive: 3549192 kB' 'Active(anon): 6432584 kB' 'Inactive(anon): 0 kB' 'Active(file): 851112 kB' 'Inactive(file): 3549192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10604808 kB' 'Mapped: 131808 kB' 'AnonPages: 231328 kB' 'Shmem: 6204504 kB' 'KernelStack: 6872 kB' 'PageTables: 4604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183136 kB' 'Slab: 454260 kB' 'SReclaimable: 183136 kB' 'SUnreclaim: 271124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.661 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
[... xtrace trimmed: every node0 field from MemTotal down to HugePages_Free is compared against HugePages_Surp and skipped with 'continue' ...]
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
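The hugepages.sh@114-@116 loop running here adds reserved and surplus pages on top of each node's expected count before the per-node comparison. Its shape, reusing the get_meminfo sketch above (resv comes from the caller and is 0 in this run):

  # Accumulate per-node expectations: requested pages + reserved + surplus.
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                                   # hugepages.sh@115
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # @116; 0 on both nodes here
  done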
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.662 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220568 kB' 'MemFree: 39476020 kB' 'MemUsed: 4744548 kB' 'SwapCached: 0 kB' 'Active: 2234040 kB' 'Inactive: 489864 kB' 'Active(anon): 1892168 kB' 'Inactive(anon): 0 kB' 'Active(file): 341872 kB' 'Inactive(file): 489864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2295588 kB' 'Mapped: 42384 kB' 'AnonPages: 428340 kB' 'Shmem: 1463852 kB' 'KernelStack: 9064 kB' 'PageTables: 3680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 310388 kB' 'Slab: 549536 kB' 'SReclaimable: 310388 kB' 'SUnreclaim: 239148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace trimmed: node1's fields are scanned the same way until HugePages_Surp matches ...]
00:03:49.663 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.663 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:49.663 18:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:49.663 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:49.663 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:49.663 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:49.663 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:49.663 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513'
00:03:49.663 node0=513 expecting 513
00:03:49.663 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:49.663 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:49.663 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:49.663 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512'
00:03:49.663 node1=512 expecting 512
00:03:49.663 18:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:49.663
00:03:49.663 real 0m2.997s
00:03:49.663 user 0m1.098s
00:03:49.663 sys 0m1.932s
00:03:49.663 18:54:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:49.663 18:54:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:49.663 ************************************
00:03:49.663 END TEST odd_alloc
00:03:49.663 ************************************
00:03:49.663 18:54:06 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc
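The [[ 512 513 == 512 513 ]] check just above is order-insensitive by construction: the page counts themselves are used as indices of plain arrays, and ${!arr[*]} lists indexed-array subscripts in ascending numeric order, so it does not matter which node carries the odd 513. A compact illustration of the trick (values taken from this run):

  declare -a sorted_t sorted_s
  nodes_test=(513 512)   # expected per-node split of the 1025 pages
  nodes_sys=(513 512)    # observed per-node HugePages_Total
  for node in "${!nodes_test[@]}"; do
      sorted_t[nodes_test[node]]=1   # the count becomes an array index
      sorted_s[nodes_sys[node]]=1
  done
  # Both key lists expand to "512 513", whichever node got the odd page.
  [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "odd_alloc OK"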
00:03:49.663 18:54:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:49.663 18:54:06 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:49.663 18:54:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:49.663 ************************************
00:03:49.663 START TEST custom_alloc
00:03:49.663 ************************************
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1129 -- # custom_alloc
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=,
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=()
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 ))
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 ))
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}"
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}"
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 ))
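Both get_test_nr_hugepages calls above reduce a kB budget to a page count against the default hugepage size (Hugepagesize: 2048 kB, per the meminfo dump later in this log), which is how 1048576 becomes nr_hugepages=512 and 2097152 becomes nr_hugepages=1024. The arithmetic, sketched with the get_meminfo helper from earlier (variable names are illustrative):

  default_hugepages=$(get_meminfo Hugepagesize)    # 2048 (kB) on this host
  size_kb=1048576                                  # requested budget in kB (1 GiB)
  (( size_kb >= default_hugepages )) || exit 1     # the guard seen at hugepages.sh@54
  (( nr_hugepages = size_kb / default_hugepages )) # 1048576 / 2048 = 512 pages
  echo "$nr_hugepages"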
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:49.664 18:54:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:52.196 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:52.196 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:52.196 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:52.196 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:52.196 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:52.197 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:52.197 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:52.197 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:52.197 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:52.197 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:52.197 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:52.197 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:52.197 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:52.197 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:52.197 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:52.197 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:52.197 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
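The HUGENODE string handed to scripts/setup.sh above is simply the nodes_hp array joined on IFS=,. A sketch of that serialization (the setup.sh path is the one from the trace; running it needs the usual SPDK environment and root):

  IFS=,                               # custom_alloc runs with local IFS=,
  nodes_hp=([0]=512 [1]=1024)
  HUGENODE=()
  for node in "${!nodes_hp[@]}"; do
      HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
  done
  # "${HUGENODE[*]}" joins elements on the first char of IFS, yielding
  #   nodes_hp[0]=512,nodes_hp[1]=1024
  HUGENODE="${HUGENODE[*]}" /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh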
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.460 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 73706576 kB' 'MemAvailable: 78596204 kB' 'Buffers: 10116 kB' 'Cached: 12890384 kB' 'SwapCached: 0 kB' 'Active: 9519008 kB' 'Inactive: 4039056 kB' 'Active(anon): 8326024 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660948 kB' 'Mapped: 174220 kB' 'Shmem: 7668460 kB' 'KReclaimable: 493524 kB' 'Slab: 1003536 kB' 'SReclaimable: 493524 kB' 'SUnreclaim: 510012 kB' 'KernelStack: 16032 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958492 kB' 'Committed_AS: 9650360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199972 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
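The hugepages.sh@95 gate seen just before this read checks /sys/kernel/mm/transparent_hugepage/enabled ('always [madvise] never' on this host) and only queries AnonHugePages when THP is not pinned to never; with no node argument, get_meminfo falls back to the system-wide /proc/meminfo, as the @22/@25 lines show. A sketch of that branch:

  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      # THP active in some mode: account anonymous hugepages separately
      anon=$(get_meminfo AnonHugePages)   # kB of THP-backed anon memory; 0 in this run
  fi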
[... xtrace trimmed: the system-wide fields are scanned until AnonHugePages matches ...]
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 73708312 kB' 'MemAvailable: 78597940 kB' 'Buffers: 10116 kB' 'Cached: 12890384 kB' 'SwapCached: 0 kB' 'Active: 9517608 kB' 'Inactive: 4039056 kB' 'Active(anon): 8324624 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659504 kB' 'Mapped: 174212 kB' 'Shmem: 7668460 kB' 'KReclaimable: 493524 kB' 'Slab: 1003564 kB' 'SReclaimable: 493524 kB' 'SUnreclaim: 510040 kB' 'KernelStack: 15952 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958492 kB' 'Committed_AS: 9650376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199908 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB' 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
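The block above is bash xtrace output from the harness's get_meminfo helper in setup/common.sh: it snapshots the meminfo file into an array with mapfile, strips the "Node <n> " prefix that per-node meminfo files carry, then re-reads the array one "key: value" line at a time, skipping until the requested key matches and echoing its value. A minimal standalone sketch of that pattern follows, assuming bash with extglob enabled; it mirrors what the trace shows but is illustrative, not the harness's exact source:

  #!/usr/bin/env bash
  shopt -s extglob

  # Sketch of the lookup pattern traced above. With no node argument it
  # reads /proc/meminfo; with one it reads the per-node file, whose lines
  # all start with "Node <n> ".
  get_meminfo() {
      local get=$1 node=$2
      local var val _
      local mem_f mem
      mem_f=/proc/meminfo
      [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # extglob: drop the "Node <n> " prefix
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # quoted RHS forces a literal match
          echo "$val"                        # value only, e.g. "1536" or "73708312"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1   # assumption: key not found; the traced runs always match
  }

  get_meminfo HugePages_Surp   # prints 0 on the run traced above
  get_meminfo MemFree 0        # per-node form, if node0 exists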
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.462 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same IFS/read/compare/continue trace repeats for every snapshot field from MemFree through HugePages_Rsvd ...]
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0
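A side note on the odd-looking patterns: each comparison prints as, e.g., \H\u\g\e\P\a\g\e\s\_\S\u\r\p because the right-hand side of the [[ == ]] is quoted in the source. Quoting forces a literal string match instead of a glob, and bash's xtrace renders a quoted pattern with every character backslash-escaped so the printed command would still match literally if re-run. A small demo of the same effect, with illustrative values:

  set -x
  get=HugePages_Surp
  [[ HugePages_Surp == "$get" ]] && echo match
  # xtrace prints: [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
  set +x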
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 73709108 kB' 'MemAvailable: 78598736 kB' 'Buffers: 10116 kB' 'Cached: 12890384 kB' 'SwapCached: 0 kB' 'Active: 9518268 kB' 'Inactive: 4039056 kB' 'Active(anon): 8325284 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660172 kB' 'Mapped: 174212 kB' 'Shmem: 7668460 kB' 'KReclaimable: 493524 kB' 'Slab: 1003564 kB' 'SReclaimable: 493524 kB' 'SUnreclaim: 510040 kB' 'KernelStack: 15968 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958492 kB' 'Committed_AS: 9650400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199908 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.464 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same IFS/read/compare/continue trace repeats for every snapshot field from MemFree through HugePages_Free ...]
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
00:03:52.466 nr_hugepages=1536
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:52.466 resv_hugepages=0
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:52.466 surplus_hugepages=0
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:52.466 anon_hugepages=0
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
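The echoes and arithmetic tests above (hugepages.sh@101 through @108) are the custom_alloc bookkeeping. In /proc/meminfo terms, HugePages_Rsvd counts pages promised to mappings but not yet faulted in and HugePages_Surp counts overcommit pages above nr_hugepages, so a clean 1536-page setup should show both at zero; as a cross-check, HugePages_Total times Hugepagesize equals the Hugetlb line (1536 x 2048 kB = 3145728 kB, i.e. 3 GiB). A small sketch restating those checks; the function name and argument order are illustrative, not the harness's source:

  # Illustrative re-statement of the checks at hugepages.sh@106 and @108.
  check_custom_alloc() {
      local want=$1 nr=$2 surp=$3 resv=$4
      # the requested count must be exactly what the kernel reports,
      # with no surplus or reserved pages skewing the total
      (( want == nr + surp + resv )) || return 1
      (( want == nr )) || return 1
  }

  check_custom_alloc 1536 1536 0 0 && echo hugepage accounting OK
  echo $(( 1536 * 2048 ))   # 3145728 kB of hugetlb memory, matching the snapshot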
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 73709636 kB' 'MemAvailable: 78599264 kB' 'Buffers: 10116 kB' 'Cached: 12890392 kB' 'SwapCached: 0 kB' 'Active: 9518564 kB' 'Inactive: 4039056 kB' 'Active(anon): 8325580 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660456 kB' 'Mapped: 174212 kB' 'Shmem: 7668468 kB' 'KReclaimable: 493524 kB' 'Slab: 1003564 kB' 'SReclaimable: 493524 kB' 'SUnreclaim: 510040 kB' 'KernelStack: 15984 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958492 kB' 'Committed_AS: 9650420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199908 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:52.466 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same IFS/read/compare/continue trace repeats for every snapshot field from MemFree through VmallocUsed ...]
00:03:52.467 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:52.467 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:52.467 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.467 18:54:09
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.467 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.467 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.467 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.468 18:54:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- 
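What the setup/common.sh records above trace is a plain key/value scan of a meminfo file. A minimal sketch of that loop, reconstructed only from the xtrace records (the function name and line tags follow the trace; anything not visible in the records, such as the exact read loop shape, is an assumption):

    #!/usr/bin/env bash
    # Sketch of the loop traced at setup/common.sh@17-@33: pick the
    # node-local meminfo when a node id is given, strip the "Node N "
    # prefix those files carry, then scan "key: value" pairs until the
    # wanted key matches and echo its value.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # common.sh@23-@24
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix (common.sh@29)
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Total     # prints 1536 on the box traced above
    get_meminfo HugePages_Surp 0    # prints 0 for node 0, as echoed below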
00:03:52.468 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 35278068 kB' 'MemUsed: 12786868 kB' 'SwapCached: 0 kB' 'Active: 7283712 kB' 'Inactive: 3549192 kB' 'Active(anon): 6432600 kB' 'Inactive(anon): 0 kB' 'Active(file): 851112 kB' 'Inactive(file): 3549192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10604968 kB' 'Mapped: 131808 kB' 'AnonPages: 231260 kB' 'Shmem: 6204664 kB' 'KernelStack: 6856 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183136 kB' 'Slab: 454072 kB' 'SReclaimable: 183136 kB' 'SUnreclaim: 270936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... the setup/common.sh@31/@32 compare-and-continue record cycle repeats for each node0 field from MemTotal through FilePmdMapped ...]
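The nodes_sys values seen earlier (512 on node0, 1024 on node1) were collected by the get_nodes records at setup/hugepages.sh@26-@32. A hedged sketch of that discovery step; the hugepages-2048kB sysfs path is the standard kernel layout and an assumption here, since the trace only shows the already-expanded assignments:

    #!/usr/bin/env bash
    # Sketch of get_nodes (hugepages.sh@26-@32): record how many 2048 kB
    # hugepages each NUMA node currently holds, as seen in sysfs.
    shopt -s extglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1
    # On this machine: nodes_sys[0]=512 and nodes_sys[1]=1024 -- the 1536
    # total that custom_alloc split across the two nodes.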
00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.469 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.470 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.470 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.470 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.470 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220568 kB' 'MemFree: 38432684 kB' 'MemUsed: 5787884 kB' 'SwapCached: 0 kB' 'Active: 2234336 kB' 
'Inactive: 489864 kB' 'Active(anon): 1892464 kB' 'Inactive(anon): 0 kB' 'Active(file): 341872 kB' 'Inactive(file): 489864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2295596 kB' 'Mapped: 42404 kB' 'AnonPages: 428656 kB' 'Shmem: 1463860 kB' 'KernelStack: 9112 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 310388 kB' 'Slab: 549492 kB' 'SReclaimable: 310388 kB' 'SUnreclaim: 239104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... the setup/common.sh@31/@32 compare-and-continue record cycle repeats for each node1 field from MemTotal through FilePmdMapped ...]
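The records just below close the check: each node's expected count picks up reserved and surplus pages, and the joined per-node lists must match exactly. A condensed, hedged sketch of that bookkeeping for this run (the real script accumulates into sorted_t/sorted_s associative arrays; this flattens the same comparison and reuses the get_meminfo sketch above):

    #!/usr/bin/env bash
    # Condensed sketch of hugepages.sh@114-@129 with this run's values.
    declare -A nodes_test=([0]=512 [1]=1024)   # requested split
    declare -A nodes_sys=([0]=512 [1]=1024)    # what get_nodes saw
    resv=0
    for node in 0 1; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # 0 and 0 here
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # Final gate, the "[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]" seen below:
    [[ ${nodes_sys[0]},${nodes_sys[1]} == "${nodes_test[0]},${nodes_test[1]}" ]]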
00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:03:52.471 node0=512 expecting 512 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:03:52.471 node1=1024 expecting 1024 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:52.471 00:03:52.471 real 0m3.113s 00:03:52.471 user 0m1.211s 00:03:52.471 sys 0m1.976s 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.471 18:54:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:52.471 ************************************ 00:03:52.471 END TEST custom_alloc 00:03:52.471 ************************************ 00:03:52.730 18:54:09 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:52.730 18:54:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.730 18:54:09 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.730 18:54:09 
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.730 ************************************ 00:03:52.730 START TEST no_shrink_alloc 00:03:52.730 ************************************ 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1129 -- # no_shrink_alloc 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.730 18:54:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:55.261 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:55.261 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:55.261 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:55.522 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:55.522 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:55.522 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:55.522 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:55.522 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:55.522 0000:00:04.0 (8086 2021): Already using the vfio-pci 
driver 00:03:55.522 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:55.522 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:55.522 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:55.522 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:55.522 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:55.522 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:55.522 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:55.522 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.522 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.523 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.523 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.523 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.523 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74763808 kB' 'MemAvailable: 79653396 kB' 'Buffers: 10116 kB' 'Cached: 12890524 kB' 'SwapCached: 0 kB' 'Active: 9518936 kB' 'Inactive: 4039056 kB' 'Active(anon): 8325952 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660632 kB' 'Mapped: 174232 kB' 'Shmem: 7668600 kB' 'KReclaimable: 493484 kB' 'Slab: 1004044 kB' 'SReclaimable: 493484 kB' 'SUnreclaim: 510560 kB' 'KernelStack: 15984 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9650764 
kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200084 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
[... the setup/common.sh@31/@32 compare-and-continue record cycle repeats for each field from MemTotal onward as the loop scans for AnonHugePages ...]
]] 00:03:55.523 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.523 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.523 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.523 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.523 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.523 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # 
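The cycle just traced is the whole of get_meminfo: snapshot the meminfo source, then scan it key by key until the requested field matches. A minimal sketch of that pattern, assuming plain /proc/meminfo input (the function name is hypothetical; the real helper is setup/common.sh's get_meminfo, which as traced above also strips 'Node N' prefixes from per-node input):

    # Sketch: print the value of one /proc/meminfo key, as traced above.
    get_meminfo_sketch() {
        local get=$1 var val _
        # IFS=': ' splits 'AnonHugePages:       0 kB' into key, value, unit.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # get_meminfo_sketch AnonHugePages  -> 0 on this node, hence anon=0 above.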
00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
[setup/common.sh@18-31 prologue identical to the AnonHugePages call above: node empty, mem_f=/proc/meminfo, mapfile -t mem, IFS=': ', read -r var val _]
00:03:55.524 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74765124 kB' 'MemAvailable: 79654584 kB' 'Buffers: 10116 kB' 'Cached: 12890528 kB' 'SwapCached: 0 kB' 'Active: 9518612 kB' 'Inactive: 4039056 kB' 'Active(anon): 8325628 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660384 kB' 'Mapped: 174224 kB' 'Shmem: 7668604 kB' 'KReclaimable: 493356 kB' 'Slab: 1003924 kB' 'SReclaimable: 493356 kB' 'SUnreclaim: 510568 kB' 'KernelStack: 16016 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9650412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200004 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
[trace condensed: setup/common.sh@31-32 compares each key, MemTotal through HugePages_Rsvd, against HugePages_Surp and executes 'continue' on every mismatch; the timestamp advances from 00:03:55.524 to 00:03:55.791 partway through]
00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
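The odd-looking '/sys/devices/system/node/node/meminfo' test in each prologue is the per-NUMA-node path with an empty node number substituted in. A hedged sketch of that selection logic, with variable names taken from the trace:

    # Sketch of the source-file choice traced at setup/common.sh@22-25.
    node=""                # empty in this run, so the sysfs test fails
    mem_f=/proc/meminfo    # global default
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi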
18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74765512 kB' 'MemAvailable: 79654972 kB' 'Buffers: 10116 kB' 'Cached: 12890548 kB' 'SwapCached: 0 kB' 'Active: 9517928 kB' 'Inactive: 4039056 kB' 'Active(anon): 8324944 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659636 kB' 'Mapped: 174224 kB' 'Shmem: 7668624 kB' 'KReclaimable: 493356 kB' 'Slab: 1003924 kB' 'SReclaimable: 493356 kB' 'SUnreclaim: 510568 kB' 'KernelStack: 15936 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9650436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199988 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB' 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.791 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.791 18:54:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.792 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.792 [... the same IFS=': ' / read -r var val _ / [[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue trace repeats for each remaining non-matching /proc/meminfo key, SecPageTables through HugePages_Free ...]
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:03:55.793 nr_hugepages=1024
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:55.793 resv_hugepages=0
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:55.793 surplus_hugepages=0
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:55.793 anon_hugepages=0
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
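The repeated @31/@32 lines in this trace are bash xtrace of a simple key scan over a /proc/meminfo snapshot: one [[ key == ... ]] / continue pair per non-matching key, then echo of the value on the first match. A minimal sketch of that pattern (assumed shape and hypothetical name; the real helper lives in SPDK's test/setup/common.sh and, per the trace, feeds the loop from a mapfile'd snapshot rather than reading the file directly):

    # Sketch of the key scan traced above; get_meminfo_sketch is a
    # hypothetical stand-in, not the verbatim SPDK helper.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # one "[[ key == ... ]]" / "continue" trace pair per skipped key
            [[ $var == "$get" ]] || continue
            echo "$val"   # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
            return 0
        done < /proc/meminfo
        return 1
    }

With the snapshot shown below, get_meminfo_sketch HugePages_Rsvd would print 0 and get_meminfo_sketch HugePages_Total would print 1024, matching the echoed values in the trace.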
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74766016 kB' 'MemAvailable: 79655476 kB' 'Buffers: 10116 kB' 'Cached: 12890548 kB' 'SwapCached: 0 kB' 'Active: 9517928 kB' 'Inactive: 4039056 kB' 'Active(anon): 8324944 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659636 kB' 'Mapped: 174224 kB' 'Shmem: 7668624 kB' 'KReclaimable: 493356 kB' 'Slab: 1003924 kB' 'SReclaimable: 493356 kB' 'SUnreclaim: 510568 kB' 'KernelStack: 15936 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9650460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199988 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:55.793 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:55.793 [... the same read/compare/continue trace repeats for each non-matching key, MemFree through Unaccepted ...]
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
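This second get_meminfo call passes a node index, and the @17-@29 lines that follow show the per-node branch: mem_f switches from /proc/meminfo to the node's own meminfo file, whose lines carry a "Node 0 " column prefix that gets stripped before the same key scan runs. A sketch reconstructed from the trace (hypothetical name; not the verbatim helper):

    # Per-node variant inferred from the @17-@29 trace lines.
    get_node_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo mem var val _
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # per-node lines look like "Node 0 HugePages_Surp: 0";
        # stripping the prefix needs "shopt -s extglob", as in the trace
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}" | while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; break; }
        done
    }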
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 34241632 kB' 'MemUsed: 13823304 kB' 'SwapCached: 0 kB' 'Active: 7283676 kB' 'Inactive: 3549192 kB' 'Active(anon): 6432564 kB' 'Inactive(anon): 0 kB' 'Active(file): 851112 kB' 'Inactive(file): 3549192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10605044 kB' 'Mapped: 131808 kB' 'AnonPages: 231052 kB' 'Shmem: 6204740 kB' 'KernelStack: 6840 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182968 kB' 'Slab: 454444 kB' 'SReclaimable: 182968 kB' 'SUnreclaim: 271476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.795 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:55.795 [... the same read/compare/continue trace repeats for each non-matching key, MemFree through HugePages_Free ...]
00:03:55.796 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.796 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.796 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:55.796 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:55.796 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:55.796 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:55.796 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
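The sorted_t/sorted_s bookkeeping just above and the node0 echo/check just below compare expected against observed pages per node. Roughly, as reconstructed from the @114-@129 trace (array names follow the trace; values are from this run):

    resv=0 surp=0
    nodes_test=([0]=1024)          # expected: 1024 pages on node0
    nodes_sys=([0]=1024 [1]=0)     # observed via get_meminfo per node
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))     # fold reserved pages into the target
        sorted_t[nodes_test[node]]=1       # collect distinct expected counts
        sorted_s[nodes_sys[node]]=1        # collect distinct observed counts
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]   # the @129 check
    done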
00:03:55.796 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:03:55.796 node0=1024 expecting 1024
00:03:55.796 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:03:55.796 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no
00:03:55.796 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512
00:03:55.796 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0
00:03:55.796 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output
00:03:55.796 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:55.797 18:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:59.092 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:59.092 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:59.092 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:59.092 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:59.092 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:59.092 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:59.092 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:59.092 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:59.092 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:59.092 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:59.092 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:59.092 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:59.092 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:59.092 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:59.092 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:59.092 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:59.092 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:59.092 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74755584 kB' 'MemAvailable: 79645044 kB' 'Buffers: 10116 kB' 'Cached: 12890656 kB' 'SwapCached: 0 kB' 'Active: 9519972 kB' 'Inactive: 4039056 kB' 'Active(anon): 8326988 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661492 kB' 'Mapped: 174252 kB' 'Shmem: 7668732 kB' 'KReclaimable: 493356 kB' 'Slab: 1003432 kB' 'SReclaimable: 493356 kB' 'SUnreclaim: 510076 kB' 'KernelStack: 15952 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9650788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200004 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
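The @95 gate a few lines above only samples AnonHugePages when transparent hugepages are not globally disabled; the literal "always [madvise] never" in the test is the contents of the kernel's THP switch, with [madvise] currently selected. A sketch of that gate (the sysfs path is standard kernel ABI; get_meminfo_sketch is the hypothetical helper sketched earlier in this log):

    # THP gate inferred from the hugepages.sh@95 trace line.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)
        echo "anon_hugepages=$anon"
    fi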
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:59.092 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:59.093 [... the same read/compare/continue trace repeats for the following non-matching keys; the captured log breaks off mid-scan at ...]
00:03:59.093 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.093 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.093 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.093 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.093 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.093 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.093 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.093 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.093 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.093 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.093 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.093 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.093 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.093 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.093 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74756180 kB' 'MemAvailable: 79645640 kB' 'Buffers: 10116 kB' 'Cached: 12890664 kB' 'SwapCached: 0 kB' 'Active: 9519476 kB' 'Inactive: 4039056 kB' 'Active(anon): 8326492 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661024 kB' 'Mapped: 174232 kB' 'Shmem: 7668740 kB' 'KReclaimable: 493356 kB' 'Slab: 1003480 kB' 'SReclaimable: 493356 kB' 'SUnreclaim: 510124 kB' 'KernelStack: 15952 kB' 
'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9650940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199940 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB' 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- 
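The scan traced above is setup/common.sh's get_meminfo pattern: split each /proc/meminfo line on ': ', compare the key against the requested field, and echo the value on the first match. A minimal standalone sketch of that loop (illustrative names, not the exact SPDK helper):

    # Sketch of the get_meminfo key scan seen in the trace: every
    # non-matching key produces one of the collapsed "continue" lines.
    get_meminfo_sketch() {
        local get=$1 var val
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"        # e.g. "0" for HugePages_Surp
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on this host, per the snapshot above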
00:03:59.094 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace collapsed: HugePages_Surp key scan over MemTotal … HugePages_Rsvd; none matched, continue]
00:03:59.095 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.095 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.095 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:59.095 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:03:59.095 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:59.095 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [xtrace collapsed: get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, mapfile -t mem, strip 'Node <n> ' prefixes, IFS=': ' read -r var val _]
00:03:59.096 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74757468 kB' 'MemAvailable: 79646928 kB' 'Buffers: 10116 kB' 'Cached: 12890688 kB' 'SwapCached: 0 kB' 'Active: 9519468 kB' 'Inactive: 4039056 kB' 'Active(anon): 8326484 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661100 kB' 'Mapped: 174232 kB' 'Shmem: 7668764 kB' 'KReclaimable: 493356 kB' 'Slab: 1003480 kB' 'SReclaimable: 493356 kB' 'SUnreclaim: 510124 kB' 'KernelStack: 15968 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9651332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199940 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
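With no node argument, the helper's -e test on /sys/devices/system/node/node/meminfo (note the empty node number in the collapsed setup lines) fails and it falls back to /proc/meminfo. When a node is given, per-node meminfo lines carry a "Node <n> " prefix, which the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips. A hedged sketch of that branch (node=0 is assumed for illustration; the traced run left it empty, and extglob is required for the +([0-9]) pattern):

    # Per-node variant of the meminfo read traced above.
    shopt -s extglob
    node=0                      # illustrative; empty in this run
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files print "Node 0 MemTotal: ..."; drop the prefix so the
    # key scan sees the same "Key: value" shape as /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"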
00:03:59.096 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace collapsed: HugePages_Rsvd key scan over MemTotal … HugePages_Free; none matched, continue]
00:03:59.097 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:59.097 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.097 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:03:59.098 nr_hugepages=1024
00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:59.098 resv_hugepages=0
00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:59.098 surplus_hugepages=0
00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:59.098 anon_hugepages=0
00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [xtrace collapsed: get=HugePages_Total, node=, mem_f=/proc/meminfo, mapfile -t mem, strip 'Node <n> ' prefixes, IFS=': ' read -r var val _]
00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285504 kB' 'MemFree: 74758164 kB' 'MemAvailable: 79647624 kB' 'Buffers: 10116 kB' 'Cached: 12890712 kB' 'SwapCached: 0 kB' 'Active: 9519500 kB' 'Inactive: 4039056 kB' 'Active(anon): 8326516 kB' 'Inactive(anon): 0 kB' 'Active(file): 1192984 kB' 'Inactive(file): 4039056 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661096 kB' 'Mapped: 174232 kB' 'Shmem: 7668788 kB' 'KReclaimable: 493356 kB' 'Slab: 1003480 kB' 'SReclaimable: 493356 kB' 'SUnreclaim: 510124 kB' 'KernelStack: 15968 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482780 kB' 'Committed_AS: 9651356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199940 kB' 'VmallocChunk: 0 kB' 'Percpu: 65376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB'
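The two arithmetic assertions at hugepages.sh@106-108 are the point of this whole scan: the pool is considered intact only if HugePages_Total (1024) equals the requested nr_hugepages plus the surplus and reserved counts just parsed (both 0), i.e. no pages were shrunk away or left over. A sketch of that bookkeeping using the values from the snapshots above:

    # Consistency check mirroring hugepages.sh@106-108 (values from this run).
    nr_hugepages=1024   # requested pool size
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    total=1024          # HugePages_Total
    (( total == nr_hugepages + surp + resv )) || echo "pool shrank/grew unexpectedly"
    (( total == nr_hugepages ))               || echo "allocation incomplete"
    # 1024 pages x 2048 kB (Hugepagesize) = 2097152 kB, matching 'Hugetlb: 2097152 kB' above.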
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 419328 kB' 'DirectMap2M: 7645184 kB' 'DirectMap1G: 94371840 kB' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.098 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:59.099 
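[Editor's note] The trace above is setup/common.sh's get_meminfo: it snapshots a meminfo file, strips any per-node "Node N " prefix, then scans key by key under IFS=': ' until the requested field matches, echoing its value. That linear scan is why the xtrace shows one continue per meminfo key. A minimal self-contained sketch of the same pattern; get_meminfo_sketch is an illustrative name, not the SPDK helper itself:

    # Sketch of the get_meminfo pattern traced above. extglob is needed
    # for the "Node N " prefix strip, as in setup/common.sh@29.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # A node argument switches the source to that node's meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node +([0-9]) }            # per-node lines carry "Node N "
            IFS=': ' read -r var val _ <<< "$line" # split "Key:   value kB"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

On this box, get_meminfo_sketch HugePages_Rsvd prints 0 and get_meminfo_sketch HugePages_Total prints 1024, matching the two lookups traced above; get_meminfo_sketch HugePages_Surp 0 reads node 0's file instead, which is the next lookup in the trace.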
00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node
00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28-29 -- # [... node loop elided: for node0 and node1 under /sys/devices/system/node, nodes_sys[0]=1024 and nodes_sys[1]=0 ...]
00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:59.099 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:59.100 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:03:59.100 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # [... locals elided: get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile -t mem, "Node 0 " prefix stripped from every line ...]
00:03:59.100 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064936 kB' 'MemFree: 34240008 kB' 'MemUsed: 13824928 kB' 'SwapCached: 0 kB' 'Active: 7283720 kB' 'Inactive: 3549192 kB' 'Active(anon): 6432608 kB' 'Inactive(anon): 0 kB' 'Active(file): 851112 kB' 'Inactive(file): 3549192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10605064 kB' 'Mapped: 131808 kB' 'AnonPages: 230976 kB' 'Shmem: 6204760 kB' 'KernelStack: 6808 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182968 kB' 'Slab: 454236 kB' 'SReclaimable: 182968 kB' 'SUnreclaim: 271268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:59.100 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [... repetitive xtrace elided: per-key scan of node0 meminfo continues until HugePages_Surp matches ...]
00:03:59.101 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.101 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.101 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:59.101 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:59.101 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:59.101 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:59.101 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:59.101 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:03:59.101 node0=1024 expecting 1024
00:03:59.101 18:54:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:03:59.101 
00:03:59.101 real 0m6.452s
00:03:59.101 user 0m2.522s
00:03:59.101 sys 0m4.075s
00:03:59.101 18:54:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:59.101 18:54:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:59.101 ************************************
00:03:59.101 END TEST no_shrink_alloc
00:03:59.101 ************************************
00:03:59.101 18:54:16 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp
00:03:59.101 18:54:16 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
00:03:59.101 18:54:16 setup.sh.hugepages -- setup/hugepages.sh@38-40 -- # [... loop elided: for each node in "${!nodes_sys[@]}", echo 0 for every hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* ...]
00:03:59.101 18:54:16 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:03:59.101 18:54:16 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
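[Editor's note] The clear_hp calls traced above walk every NUMA node's hugepage pools and write 0 into each, so the next test starts from a clean slate. A hedged sketch of that cleanup step; clear_hp_sketch is an illustrative name, it assumes root, and the nr_hugepages redirect target is implied by the sysfs layout rather than shown in the xtrace (bash does not trace redirections):

    # Reset every huge page pool on every NUMA node, as clear_hp does
    # at setup/hugepages.sh@36-44 in the trace above.
    clear_hp_sketch() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                [[ -e $hp/nr_hugepages ]] || continue   # skip unmatched globs
                echo 0 > "$hp/nr_hugepages"             # release this pool
            done
        done
        export CLEAR_HUGE=yes   # the trace exports the same flag afterwards
    }

On this two-node box the outer loop visits node0 and node1, mirroring the two iterations visible in the elided loop above.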
00:03:59.101 
00:03:59.101 real 0m23.133s
00:03:59.101 user 0m7.739s
00:03:59.101 sys 0m12.667s
00:03:59.101 18:54:16 setup.sh.hugepages -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:59.101 18:54:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:59.101 ************************************
00:03:59.101 END TEST hugepages
00:03:59.101 ************************************
00:03:59.101 18:54:16 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:59.101 18:54:16 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:59.101 18:54:16 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:59.101 18:54:16 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:59.360 ************************************
00:03:59.360 START TEST driver
00:03:59.360 ************************************
00:03:59.360 18:54:16 setup.sh.driver -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:59.360 * Looking for test storage...
00:03:59.360 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:03:59.360 18:54:16 setup.sh.driver -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:59.360 18:54:16 setup.sh.driver -- common/autotest_common.sh@1693 -- # lcov --version
00:03:59.360 18:54:16 setup.sh.driver -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:59.360 18:54:16 setup.sh.driver -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:59.360 18:54:16 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:59.360 18:54:16 setup.sh.driver -- scripts/common.sh@333-368 -- # [... component-wise compare elided: ver1=(1 15), ver2=(2), op='<'; decimal 1 -> 1, decimal 2 -> 2, (( ver1[v] < ver2[v] )) holds at v=0, so cmp_versions returns 0 ...]
00:03:59.361 18:54:16 setup.sh.driver -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:59.361 18:54:16 setup.sh.driver -- common/autotest_common.sh@1706-1707 -- # [... exports elided: LCOV_OPTS and LCOV are exported then assigned with the identical payload, shown once below ...]
00:03:59.361 lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:03:59.361 18:54:16 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:03:59.361 18:54:16 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:59.361 18:54:16 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
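[Editor's note] The lt 1.15 2 / cmp_versions trace above is the lcov version gate: both version strings are split on '.', '-' and ':' into arrays and compared numerically, component by component. A sketch of the same comparison, assuming purely numeric components (the real scripts/common.sh routes each part through its decimal() helper first, and also handles '<=' and '>='):

    # Component-wise version compare, as in scripts/common.sh@333-368.
    cmp_versions_sketch() {   # usage: cmp_versions_sketch 1.15 '<' 2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        local op=$2 v max=${#ver1[@]}
        (( ${#ver2[@]} > max )) && max=${#ver2[@]}
        for (( v = 0; v < max; v++ )); do
            # Missing components count as 0, so "2" compares like "2.0".
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]     # every component equal
    }

Here cmp_versions_sketch 1.15 '<' 2 succeeds at the first component (1 < 2), which is exactly why the traced call returned 0 and the coverage flags above were exported.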
00:04:04.631 18:54:20 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:04.631 18:54:20 setup.sh.driver -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:04.631 18:54:20 setup.sh.driver -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:04.631 18:54:20 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:04.631 ************************************
00:04:04.631 START TEST guess_driver
00:04:04.631 ************************************
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@1129 -- # guess_driver
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 160 > 0 ))
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:04:04.631 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:04.631 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:04.631 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:04.631 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:04.631 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:04:04.631 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:04:04.631 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:04:04.631 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:04:04.632 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:04:04.632 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:04.632 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:04:04.632 Looking for driver=vfio-pci
00:04:04.632 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
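[Editor's note] pick_driver settles on vfio-pci here because the host exposes 160 IOMMU groups and modprobe can resolve vfio_pci's dependency chain to concrete .ko files. A condensed sketch of that decision; pick_vfio_sketch is an illustrative name, and the 'No valid driver found' string is the sentinel that driver.sh@51 tests against in the trace:

    # Probe whether vfio-pci is usable, mirroring the vfio()/is_driver()
    # steps traced above.
    pick_vfio_sketch() {
        local unsafe=N param=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
        [[ -e $param ]] && unsafe=$(<"$param")
        # Default (non-nullglob) array assignment, same quirk as driver.sh@27;
        # this host yields 160 entries.
        local groups=(/sys/kernel/iommu_groups/*)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            # is_driver: a resolvable module lists at least one .ko in its deps
            if [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
        return 1
    }

The *.ko* pattern also matches the compressed .ko.xz paths modprobe printed above, just as the original *\.\k\o* test does.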
00:04:04.632 18:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:04.632 18:54:20 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:04:04.632 18:54:20 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:07.167 18:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@57-61 -- # [... repetitive xtrace elided: between 18:54:23 and 18:54:27 the loop repeats "[[ -> == \-\> ]]", "[[ vfio-pci == vfio-pci ]]" and "read -r _ _ _ _ marker setup_driver" for each device in the config output; every device reports vfio-pci ...]
00:04:10.460 18:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:10.460 18:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:10.460 18:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:10.460 18:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:10.460 18:54:27 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:04:10.460 18:54:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:10.460 18:54:27 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:14.653 
00:04:14.653 real 0m10.710s
00:04:14.653 user 0m2.446s
00:04:14.653 sys 0m4.542s
00:04:14.653 18:54:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:14.653 18:54:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:04:14.653 ************************************
00:04:14.653 END TEST guess_driver
00:04:14.653 ************************************
00:04:14.653 
00:04:14.653 real 0m15.371s
00:04:14.653 user 0m3.830s
00:04:14.653 sys 0m7.157s
00:04:14.653 18:54:31 setup.sh.driver -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:14.653 18:54:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:14.653 ************************************
00:04:14.653 END TEST driver
00:04:14.653 ************************************
00:04:14.653 18:54:31 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
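[Editor's note] Every START/END banner and real/user/sys triple in this log comes from the run_test wrapper: it prints a banner, runs the named test under the bash time keyword, and closes with a matching banner. A sketch of that harness shape, inferred from the output alone; run_test_sketch is an illustrative name, and the real autotest_common.sh version also validates its arguments (the '[' 2 -le 1 ']' checks above) and toggles xtrace around the run:

    # Banner-and-time wrapper matching the log's START/END TEST framing.
    run_test_sketch() {
        local name=$1; shift
        printf '%s\n' '************************************' \
            "START TEST $name" '************************************'
        time "$@"                 # emits the real/user/sys lines seen above
        local rc=$?
        printf '%s\n' '************************************' \
            "END TEST $name" '************************************'
        return $rc
    }

Invoked as run_test_sketch driver /path/to/driver.sh, it reproduces the banner and timing shape seen at 00:03:59.360 and 00:04:14.653 above.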
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.653 18:54:31 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.653 18:54:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:14.653 ************************************ 00:04:14.653 START TEST devices 00:04:14.653 ************************************ 00:04:14.653 18:54:31 setup.sh.devices -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:14.653 * Looking for test storage... 00:04:14.913 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:14.913 18:54:31 setup.sh.devices -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:14.913 18:54:31 setup.sh.devices -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.913 18:54:31 setup.sh.devices -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:14.913 18:54:31 setup.sh.devices -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.913 18:54:31 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:04:14.913 18:54:31 setup.sh.devices -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.913 18:54:31 setup.sh.devices -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.913 --rc genhtml_branch_coverage=1 00:04:14.913 --rc genhtml_function_coverage=1 00:04:14.913 --rc genhtml_legend=1 00:04:14.913 --rc geninfo_all_blocks=1 00:04:14.913 --rc geninfo_unexecuted_blocks=1 00:04:14.913 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:14.913 ' 00:04:14.913 18:54:31 setup.sh.devices -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.913 --rc genhtml_branch_coverage=1 00:04:14.913 --rc genhtml_function_coverage=1 00:04:14.913 --rc genhtml_legend=1 00:04:14.913 --rc geninfo_all_blocks=1 00:04:14.913 --rc geninfo_unexecuted_blocks=1 00:04:14.913 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:14.913 ' 00:04:14.913 18:54:31 setup.sh.devices -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.913 --rc genhtml_branch_coverage=1 00:04:14.913 --rc genhtml_function_coverage=1 00:04:14.913 --rc genhtml_legend=1 00:04:14.913 --rc geninfo_all_blocks=1 00:04:14.913 --rc geninfo_unexecuted_blocks=1 00:04:14.913 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:14.913 ' 00:04:14.913 18:54:31 setup.sh.devices -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.914 --rc genhtml_branch_coverage=1 00:04:14.914 --rc genhtml_function_coverage=1 00:04:14.914 --rc genhtml_legend=1 00:04:14.914 --rc geninfo_all_blocks=1 00:04:14.914 --rc geninfo_unexecuted_blocks=1 00:04:14.914 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:14.914 ' 00:04:14.914 18:54:31 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:14.914 18:54:31 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:14.914 18:54:31 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.914 18:54:31 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:19.111 18:54:35 setup.sh.devices -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:19.111 18:54:35 setup.sh.devices -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:19.111 18:54:35 setup.sh.devices -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:19.111 18:54:35 setup.sh.devices -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:19.111 18:54:35 setup.sh.devices -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:19.111 18:54:35 setup.sh.devices -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:19.111 18:54:35 setup.sh.devices -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.111 18:54:35 setup.sh.devices -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:19.111 18:54:35 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:04:19.111 18:54:35 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:19.111 No valid GPT data, bailing 00:04:19.111 18:54:35 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.111 18:54:35 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:04:19.111 18:54:35 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:19.111 18:54:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:19.111 18:54:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:19.111 18:54:35 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:19.111 18:54:35 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:19.111 18:54:35 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.111 18:54:35 
setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.111 18:54:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:19.111 ************************************ 00:04:19.111 START TEST nvme_mount 00:04:19.111 ************************************ 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1129 -- # nvme_mount 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:19.111 18:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:19.371 Creating new GPT entries in memory. 00:04:19.371 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:19.371 other utilities. 00:04:19.371 18:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:19.371 18:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.371 18:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:19.371 18:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:19.371 18:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:20.750 Creating new GPT entries in memory. 00:04:20.750 The operation has completed successfully. 
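[Editor's note: the --new=1:2048:2099199 bounds above follow directly from the 1 GiB test partition size converted to 512-byte sectors by the traced (( size /= 512 )). A worked sketch of the arithmetic, mirroring the setup/common.sh counters seen in the trace (illustrative, not the exact source); the same formula yields the second partition created later in the dm_mount test.]

    #!/usr/bin/env bash
    size=1073741824                 # 1 GiB per test partition, in bytes
    (( size /= 512 ))               # 512-byte sectors -> 2097152
    part_start=0 part_end=0
    for part in 1 2; do
        # first partition starts at sector 2048; each later one follows the previous end
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        echo "sgdisk /dev/nvme0n1 --new=${part}:${part_start}:${part_end}"
    done
    # prints --new=1:2048:2099199 and --new=2:2099200:4196351, matching the log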
00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2711662 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.750 18:54:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 
18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:24.046 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.046 18:54:40 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.046 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:24.046 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:24.046 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:24.046 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.046 18:54:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.337 18:54:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local 
pci status 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.337 18:54:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ 
_ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:30.054 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.313 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.313 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:30.313 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:30.313 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:30.313 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.313 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:30.313 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:30.313 18:54:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:30.313 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:30.313 00:04:30.313 real 0m11.793s 00:04:30.313 user 0m3.418s 00:04:30.313 sys 0m6.300s 00:04:30.313 18:54:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.313 18:54:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:30.313 ************************************ 00:04:30.313 END TEST nvme_mount 00:04:30.313 ************************************ 00:04:30.313 18:54:47 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:30.313 18:54:47 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.313 18:54:47 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.313 18:54:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:30.313 ************************************ 00:04:30.313 START TEST dm_mount 00:04:30.313 ************************************ 00:04:30.313 18:54:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1129 -- # dm_mount 00:04:30.313 18:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:30.313 18:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:30.313 18:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:30.313 18:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:30.313 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:30.313 
18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:30.313 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:30.313 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:30.313 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:30.313 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:30.313 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:30.313 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.313 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:30.313 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:30.314 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.314 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:30.314 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:30.314 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.314 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:30.314 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:30.314 18:54:47 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:31.250 Creating new GPT entries in memory. 00:04:31.250 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:31.250 other utilities. 00:04:31.250 18:54:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:31.250 18:54:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.250 18:54:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:31.250 18:54:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:31.250 18:54:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:32.632 Creating new GPT entries in memory. 00:04:32.632 The operation has completed successfully. 00:04:32.632 18:54:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:32.632 18:54:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.632 18:54:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:32.632 18:54:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:32.632 18:54:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:33.568 The operation has completed successfully. 
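[Editor's note: with both partitions created, the test assembles a device-mapper node and resolves its kernel name, as traced below. A minimal sketch of that resolution step, assuming the dm device exists as /dev/mapper/nvme_dm_test; the dmsetup table itself is not shown in this log.]

    #!/usr/bin/env bash
    dm_name=nvme_dm_test
    # /dev/mapper/<name> is a symlink; resolve it to the real /dev/dm-N node
    dm=$(readlink -f "/dev/mapper/$dm_name")    # e.g. /dev/dm-0
    dm=${dm##*/}                                # strip the path -> dm-0
    # each backing partition should now list the dm node as a holder
    for part in nvme0n1p1 nvme0n1p2; do
        [[ -e /sys/class/block/$part/holders/$dm ]] && echo "$part held by $dm"
    done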
00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2715314 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.569 18:54:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.858 18:54:53 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.858 18:54:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:40.144 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:40.144 00:04:40.144 real 0m9.522s 00:04:40.144 user 0m2.338s 00:04:40.144 sys 0m4.255s 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.144 18:54:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:40.144 ************************************ 00:04:40.144 END TEST dm_mount 00:04:40.144 ************************************ 00:04:40.144 18:54:56 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:40.144 18:54:56 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:40.144 18:54:56 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.144 18:54:56 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:40.144 18:54:56 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:40.144 18:54:56 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:40.144 18:54:56 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:40.144 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:40.144 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:40.144 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:40.144 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:40.144 18:54:57 
setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:40.144 18:54:57 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:40.145 18:54:57 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:40.145 18:54:57 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:40.145 18:54:57 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:40.145 18:54:57 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:40.145 18:54:57 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:40.145 00:04:40.145 real 0m25.498s 00:04:40.145 user 0m7.197s 00:04:40.145 sys 0m13.206s 00:04:40.145 18:54:57 setup.sh.devices -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.145 18:54:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:40.145 ************************************ 00:04:40.145 END TEST devices 00:04:40.145 ************************************ 00:04:40.145 00:04:40.145 real 1m28.883s 00:04:40.145 user 0m26.141s 00:04:40.145 sys 0m46.915s 00:04:40.145 18:54:57 setup.sh -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.145 18:54:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:40.145 ************************************ 00:04:40.145 END TEST setup.sh 00:04:40.145 ************************************ 00:04:40.145 18:54:57 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:43.429 Hugepages 00:04:43.429 node hugesize free / total 00:04:43.429 node0 1048576kB 0 / 0 00:04:43.429 node0 2048kB 1024 / 1024 00:04:43.429 node1 1048576kB 0 / 0 00:04:43.429 node1 2048kB 1024 / 1024 00:04:43.429 00:04:43.429 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:43.429 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:43.429 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:43.429 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:43.429 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:43.429 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:43.429 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:43.429 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:43.429 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:43.429 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:43.429 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:43.429 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:43.429 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:43.429 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:43.429 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:43.429 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:43.429 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:43.429 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:43.429 18:55:00 -- spdk/autotest.sh@117 -- # uname -s 00:04:43.429 18:55:00 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:43.429 18:55:00 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:43.429 18:55:00 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:46.711 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:46.711 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:46.711 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:46.711 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:46.711 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:46.711 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:46.711 0000:00:04.1 
(8086 2021): ioatdma -> vfio-pci 00:04:46.711 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:46.711 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:46.711 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:46.711 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:46.711 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:46.711 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:46.711 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:46.711 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:46.711 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:49.998 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:49.998 18:55:07 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:50.933 18:55:08 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:50.933 18:55:08 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:50.933 18:55:08 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:50.933 18:55:08 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:50.933 18:55:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:50.933 18:55:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:50.933 18:55:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:50.933 18:55:08 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:50.933 18:55:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:51.192 18:55:08 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:51.192 18:55:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:51.192 18:55:08 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.488 Waiting for block devices as requested 00:04:54.488 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:54.488 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:54.488 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:54.488 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:54.488 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:54.488 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:54.747 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:54.747 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:54.747 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:54.747 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:55.005 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:55.005 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:55.006 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:55.263 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:55.263 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:55.263 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:55.522 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:55.522 18:55:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:55.522 18:55:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:55.522 18:55:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:55.522 18:55:12 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:55.522 18:55:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:55.522 18:55:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:55.522 18:55:12 -- common/autotest_common.sh@1492 -- # basename 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:55.522 18:55:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:55.522 18:55:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:55.522 18:55:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:55.522 18:55:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:55.522 18:55:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:55.522 18:55:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:55.522 18:55:12 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:55.522 18:55:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:55.522 18:55:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:55.522 18:55:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:55.522 18:55:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:55.522 18:55:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:55.522 18:55:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:55.522 18:55:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:55.522 18:55:12 -- common/autotest_common.sh@1543 -- # continue 00:04:55.522 18:55:12 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:55.522 18:55:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:55.522 18:55:12 -- common/autotest_common.sh@10 -- # set +x 00:04:55.780 18:55:12 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:55.780 18:55:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.780 18:55:12 -- common/autotest_common.sh@10 -- # set +x 00:04:55.780 18:55:12 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:59.068 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:59.068 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:59.068 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:59.068 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:59.068 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:59.068 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:59.068 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:59.068 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:59.068 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:59.068 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:59.068 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:59.068 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:59.068 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:59.068 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:59.068 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:59.068 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:02.356 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:02.356 18:55:19 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:02.356 18:55:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:02.356 18:55:19 -- common/autotest_common.sh@10 -- # set +x 00:05:02.356 18:55:19 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:02.356 18:55:19 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:02.356 18:55:19 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:02.356 18:55:19 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:02.356 18:55:19 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:02.356 18:55:19 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:02.356 18:55:19 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:02.356 18:55:19 -- 
common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:02.356 18:55:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:02.356 18:55:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:02.356 18:55:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:02.356 18:55:19 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:02.356 18:55:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:02.356 18:55:19 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:02.356 18:55:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:05:02.356 18:55:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:02.356 18:55:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:02.356 18:55:19 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:02.356 18:55:19 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:02.356 18:55:19 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:02.356 18:55:19 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:02.356 18:55:19 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:05:02.356 18:55:19 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:05:02.356 18:55:19 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2723511 00:05:02.356 18:55:19 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.356 18:55:19 -- common/autotest_common.sh@1585 -- # waitforlisten 2723511 00:05:02.356 18:55:19 -- common/autotest_common.sh@835 -- # '[' -z 2723511 ']' 00:05:02.356 18:55:19 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.357 18:55:19 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.357 18:55:19 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.357 18:55:19 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.357 18:55:19 -- common/autotest_common.sh@10 -- # set +x 00:05:02.357 [2024-11-26 18:55:19.431944] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
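For reference, the get_nvme_bdfs / get_nvme_bdfs_by_id helpers traced above reduce to roughly the following minimal sketch; the gen_nvme.sh path and the 0x0a54 device id are taken from this particular run, so treat both as placeholders on other machines:

    bdfs=()
    # Enumerate the NVMe transport addresses SPDK's config generator knows about.
    for bdf in $(/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
        # Keep only controllers whose PCI device id matches the target (0x0a54 here).
        if [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]]; then
            bdfs+=("$bdf")
        fi
    done
    printf '%s\n' "${bdfs[@]}"   # prints 0000:5e:00.0 on this box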
00:05:02.357 [2024-11-26 18:55:19.431998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723511 ] 00:05:02.357 [2024-11-26 18:55:19.502271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.357 [2024-11-26 18:55:19.551191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.615 18:55:19 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.615 18:55:19 -- common/autotest_common.sh@868 -- # return 0 00:05:02.615 18:55:19 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:02.615 18:55:19 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:02.615 18:55:19 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:05.901 nvme0n1 00:05:05.902 18:55:22 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:05.902 [2024-11-26 18:55:22.930215] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:05.902 request: 00:05:05.902 { 00:05:05.902 "nvme_ctrlr_name": "nvme0", 00:05:05.902 "password": "test", 00:05:05.902 "method": "bdev_nvme_opal_revert", 00:05:05.902 "req_id": 1 00:05:05.902 } 00:05:05.902 Got JSON-RPC error response 00:05:05.902 response: 00:05:05.902 { 00:05:05.902 "code": -32602, 00:05:05.902 "message": "Invalid parameters" 00:05:05.902 } 00:05:05.902 18:55:22 -- common/autotest_common.sh@1591 -- # true 00:05:05.902 18:55:22 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:05.902 18:55:22 -- common/autotest_common.sh@1595 -- # killprocess 2723511 00:05:05.902 18:55:22 -- common/autotest_common.sh@954 -- # '[' -z 2723511 ']' 00:05:05.902 18:55:22 -- common/autotest_common.sh@958 -- # kill -0 2723511 00:05:05.902 18:55:22 -- common/autotest_common.sh@959 -- # uname 00:05:05.902 18:55:22 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.902 18:55:22 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2723511 00:05:05.902 18:55:22 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.902 18:55:22 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.902 18:55:22 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2723511' 00:05:05.902 killing process with pid 2723511 00:05:05.902 18:55:22 -- common/autotest_common.sh@973 -- # kill 2723511 00:05:05.902 18:55:22 -- common/autotest_common.sh@978 -- # wait 2723511 00:05:10.087 18:55:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:10.087 18:55:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:10.087 18:55:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:10.087 18:55:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:10.087 18:55:26 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:10.087 18:55:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.087 18:55:26 -- common/autotest_common.sh@10 -- # set +x 00:05:10.087 18:55:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:10.087 18:55:26 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:10.087 18:55:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.087 18:55:26 -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:05:10.087 18:55:26 -- common/autotest_common.sh@10 -- # set +x 00:05:10.087 ************************************ 00:05:10.087 START TEST env 00:05:10.087 ************************************ 00:05:10.087 18:55:26 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:10.087 * Looking for test storage... 00:05:10.087 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:10.087 18:55:27 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:10.087 18:55:27 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:10.087 18:55:27 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:10.087 18:55:27 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:10.087 18:55:27 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.087 18:55:27 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.087 18:55:27 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.087 18:55:27 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.087 18:55:27 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.087 18:55:27 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.087 18:55:27 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.087 18:55:27 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.087 18:55:27 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.087 18:55:27 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.087 18:55:27 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.087 18:55:27 env -- scripts/common.sh@344 -- # case "$op" in 00:05:10.087 18:55:27 env -- scripts/common.sh@345 -- # : 1 00:05:10.087 18:55:27 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.087 18:55:27 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.087 18:55:27 env -- scripts/common.sh@365 -- # decimal 1 00:05:10.087 18:55:27 env -- scripts/common.sh@353 -- # local d=1 00:05:10.087 18:55:27 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.087 18:55:27 env -- scripts/common.sh@355 -- # echo 1 00:05:10.087 18:55:27 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.087 18:55:27 env -- scripts/common.sh@366 -- # decimal 2 00:05:10.087 18:55:27 env -- scripts/common.sh@353 -- # local d=2 00:05:10.087 18:55:27 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.087 18:55:27 env -- scripts/common.sh@355 -- # echo 2 00:05:10.087 18:55:27 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.087 18:55:27 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.087 18:55:27 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.087 18:55:27 env -- scripts/common.sh@368 -- # return 0 00:05:10.087 18:55:27 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.087 18:55:27 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:10.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.087 --rc genhtml_branch_coverage=1 00:05:10.087 --rc genhtml_function_coverage=1 00:05:10.087 --rc genhtml_legend=1 00:05:10.087 --rc geninfo_all_blocks=1 00:05:10.087 --rc geninfo_unexecuted_blocks=1 00:05:10.087 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:10.087 ' 00:05:10.087 18:55:27 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:10.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.087 --rc genhtml_branch_coverage=1 00:05:10.087 --rc genhtml_function_coverage=1 00:05:10.087 --rc genhtml_legend=1 00:05:10.087 --rc geninfo_all_blocks=1 00:05:10.087 --rc geninfo_unexecuted_blocks=1 00:05:10.087 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:10.087 ' 00:05:10.087 18:55:27 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:10.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.087 --rc genhtml_branch_coverage=1 00:05:10.087 --rc genhtml_function_coverage=1 00:05:10.087 --rc genhtml_legend=1 00:05:10.087 --rc geninfo_all_blocks=1 00:05:10.087 --rc geninfo_unexecuted_blocks=1 00:05:10.087 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:10.087 ' 00:05:10.087 18:55:27 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:10.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.087 --rc genhtml_branch_coverage=1 00:05:10.087 --rc genhtml_function_coverage=1 00:05:10.087 --rc genhtml_legend=1 00:05:10.087 --rc geninfo_all_blocks=1 00:05:10.087 --rc geninfo_unexecuted_blocks=1 00:05:10.087 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:10.087 ' 00:05:10.087 18:55:27 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:10.087 18:55:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.087 18:55:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.087 18:55:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.087 ************************************ 00:05:10.087 START TEST env_memory 00:05:10.087 ************************************ 00:05:10.087 18:55:27 env.env_memory -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:10.087 00:05:10.087 00:05:10.087 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.087 http://cunit.sourceforge.net/ 00:05:10.087 00:05:10.087 00:05:10.087 Suite: memory 00:05:10.087 Test: alloc and free memory map ...[2024-11-26 18:55:27.228863] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:10.087 passed 00:05:10.087 Test: mem map translation ...[2024-11-26 18:55:27.242129] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:10.087 [2024-11-26 18:55:27.242146] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:10.087 [2024-11-26 18:55:27.242178] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:10.087 [2024-11-26 18:55:27.242187] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:10.087 passed 00:05:10.087 Test: mem map registration ...[2024-11-26 18:55:27.264127] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:10.087 [2024-11-26 18:55:27.264143] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:10.087 passed 00:05:10.087 Test: mem map adjacent registrations ...passed 00:05:10.087 00:05:10.087 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.087 suites 1 1 n/a 0 0 00:05:10.087 tests 4 4 4 0 0 00:05:10.087 asserts 152 152 152 0 n/a 00:05:10.087 00:05:10.087 Elapsed time = 0.085 seconds 00:05:10.087 00:05:10.087 real 0m0.094s 00:05:10.087 user 0m0.084s 00:05:10.087 sys 0m0.009s 00:05:10.087 18:55:27 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.087 18:55:27 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:10.087 ************************************ 00:05:10.087 END TEST env_memory 00:05:10.088 ************************************ 00:05:10.346 18:55:27 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:10.346 18:55:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.346 18:55:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.346 18:55:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.346 ************************************ 00:05:10.346 START TEST env_vtophys 00:05:10.346 ************************************ 00:05:10.346 18:55:27 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:10.346 EAL: lib.eal log level changed from notice to debug 00:05:10.346 EAL: Detected lcore 0 as core 0 on socket 0 00:05:10.346 EAL: Detected lcore 1 as core 1 on socket 0 00:05:10.346 EAL: Detected lcore 2 as core 2 on socket 0 00:05:10.346 EAL: Detected lcore 3 as 
core 3 on socket 0 00:05:10.346 EAL: Detected lcore 4 as core 4 on socket 0 00:05:10.346 EAL: Detected lcore 5 as core 8 on socket 0 00:05:10.346 EAL: Detected lcore 6 as core 9 on socket 0 00:05:10.347 EAL: Detected lcore 7 as core 10 on socket 0 00:05:10.347 EAL: Detected lcore 8 as core 11 on socket 0 00:05:10.347 EAL: Detected lcore 9 as core 16 on socket 0 00:05:10.347 EAL: Detected lcore 10 as core 17 on socket 0 00:05:10.347 EAL: Detected lcore 11 as core 18 on socket 0 00:05:10.347 EAL: Detected lcore 12 as core 19 on socket 0 00:05:10.347 EAL: Detected lcore 13 as core 20 on socket 0 00:05:10.347 EAL: Detected lcore 14 as core 24 on socket 0 00:05:10.347 EAL: Detected lcore 15 as core 25 on socket 0 00:05:10.347 EAL: Detected lcore 16 as core 26 on socket 0 00:05:10.347 EAL: Detected lcore 17 as core 27 on socket 0 00:05:10.347 EAL: Detected lcore 18 as core 0 on socket 1 00:05:10.347 EAL: Detected lcore 19 as core 1 on socket 1 00:05:10.347 EAL: Detected lcore 20 as core 2 on socket 1 00:05:10.347 EAL: Detected lcore 21 as core 3 on socket 1 00:05:10.347 EAL: Detected lcore 22 as core 4 on socket 1 00:05:10.347 EAL: Detected lcore 23 as core 8 on socket 1 00:05:10.347 EAL: Detected lcore 24 as core 9 on socket 1 00:05:10.347 EAL: Detected lcore 25 as core 10 on socket 1 00:05:10.347 EAL: Detected lcore 26 as core 11 on socket 1 00:05:10.347 EAL: Detected lcore 27 as core 16 on socket 1 00:05:10.347 EAL: Detected lcore 28 as core 17 on socket 1 00:05:10.347 EAL: Detected lcore 29 as core 18 on socket 1 00:05:10.347 EAL: Detected lcore 30 as core 19 on socket 1 00:05:10.347 EAL: Detected lcore 31 as core 20 on socket 1 00:05:10.347 EAL: Detected lcore 32 as core 24 on socket 1 00:05:10.347 EAL: Detected lcore 33 as core 25 on socket 1 00:05:10.347 EAL: Detected lcore 34 as core 26 on socket 1 00:05:10.347 EAL: Detected lcore 35 as core 27 on socket 1 00:05:10.347 EAL: Detected lcore 36 as core 0 on socket 0 00:05:10.347 EAL: Detected lcore 37 as core 1 on socket 0 00:05:10.347 EAL: Detected lcore 38 as core 2 on socket 0 00:05:10.347 EAL: Detected lcore 39 as core 3 on socket 0 00:05:10.347 EAL: Detected lcore 40 as core 4 on socket 0 00:05:10.347 EAL: Detected lcore 41 as core 8 on socket 0 00:05:10.347 EAL: Detected lcore 42 as core 9 on socket 0 00:05:10.347 EAL: Detected lcore 43 as core 10 on socket 0 00:05:10.347 EAL: Detected lcore 44 as core 11 on socket 0 00:05:10.347 EAL: Detected lcore 45 as core 16 on socket 0 00:05:10.347 EAL: Detected lcore 46 as core 17 on socket 0 00:05:10.347 EAL: Detected lcore 47 as core 18 on socket 0 00:05:10.347 EAL: Detected lcore 48 as core 19 on socket 0 00:05:10.347 EAL: Detected lcore 49 as core 20 on socket 0 00:05:10.347 EAL: Detected lcore 50 as core 24 on socket 0 00:05:10.347 EAL: Detected lcore 51 as core 25 on socket 0 00:05:10.347 EAL: Detected lcore 52 as core 26 on socket 0 00:05:10.347 EAL: Detected lcore 53 as core 27 on socket 0 00:05:10.347 EAL: Detected lcore 54 as core 0 on socket 1 00:05:10.347 EAL: Detected lcore 55 as core 1 on socket 1 00:05:10.347 EAL: Detected lcore 56 as core 2 on socket 1 00:05:10.347 EAL: Detected lcore 57 as core 3 on socket 1 00:05:10.347 EAL: Detected lcore 58 as core 4 on socket 1 00:05:10.347 EAL: Detected lcore 59 as core 8 on socket 1 00:05:10.347 EAL: Detected lcore 60 as core 9 on socket 1 00:05:10.347 EAL: Detected lcore 61 as core 10 on socket 1 00:05:10.347 EAL: Detected lcore 62 as core 11 on socket 1 00:05:10.347 EAL: Detected lcore 63 as core 16 on socket 1 00:05:10.347 EAL: 
Detected lcore 64 as core 17 on socket 1 00:05:10.347 EAL: Detected lcore 65 as core 18 on socket 1 00:05:10.347 EAL: Detected lcore 66 as core 19 on socket 1 00:05:10.347 EAL: Detected lcore 67 as core 20 on socket 1 00:05:10.347 EAL: Detected lcore 68 as core 24 on socket 1 00:05:10.347 EAL: Detected lcore 69 as core 25 on socket 1 00:05:10.347 EAL: Detected lcore 70 as core 26 on socket 1 00:05:10.347 EAL: Detected lcore 71 as core 27 on socket 1 00:05:10.347 EAL: Maximum logical cores by configuration: 128 00:05:10.347 EAL: Detected CPU lcores: 72 00:05:10.347 EAL: Detected NUMA nodes: 2 00:05:10.347 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:10.347 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:10.347 EAL: Checking presence of .so 'librte_eal.so' 00:05:10.347 EAL: Detected static linkage of DPDK 00:05:10.347 EAL: No shared files mode enabled, IPC will be disabled 00:05:10.347 EAL: Bus pci wants IOVA as 'DC' 00:05:10.347 EAL: Buses did not request a specific IOVA mode. 00:05:10.347 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:10.347 EAL: Selected IOVA mode 'VA' 00:05:10.347 EAL: Probing VFIO support... 00:05:10.347 EAL: IOMMU type 1 (Type 1) is supported 00:05:10.347 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:10.347 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:10.347 EAL: VFIO support initialized 00:05:10.347 EAL: Ask a virtual area of 0x2e000 bytes 00:05:10.347 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:10.347 EAL: Setting up physically contiguous memory... 00:05:10.347 EAL: Setting maximum number of open files to 524288 00:05:10.347 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:10.347 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:10.347 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:10.347 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.347 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:10.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.347 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.347 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:10.347 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:10.347 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.347 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:10.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.347 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.347 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:10.347 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:10.347 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.347 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:10.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.347 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.347 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:10.347 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:10.347 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.347 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:10.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.347 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.347 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:10.347 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:10.347 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:10.347 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.347 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:10.347 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.347 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.347 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:10.347 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:10.347 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.347 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:10.347 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.347 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.347 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:10.347 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:10.347 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.347 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:10.347 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.347 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.347 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:10.347 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:10.347 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.347 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:10.347 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.347 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.347 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:10.347 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:10.347 EAL: Hugepages will be freed exactly as allocated. 00:05:10.347 EAL: No shared files mode enabled, IPC is disabled 00:05:10.347 EAL: No shared files mode enabled, IPC is disabled 00:05:10.347 EAL: TSC frequency is ~2300000 KHz 00:05:10.347 EAL: Main lcore 0 is ready (tid=7fc3b53d6a00;cpuset=[0]) 00:05:10.347 EAL: Trying to obtain current memory policy. 00:05:10.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.347 EAL: Restoring previous memory policy: 0 00:05:10.347 EAL: request: mp_malloc_sync 00:05:10.347 EAL: No shared files mode enabled, IPC is disabled 00:05:10.347 EAL: Heap on socket 0 was expanded by 2MB 00:05:10.347 EAL: No shared files mode enabled, IPC is disabled 00:05:10.347 EAL: Mem event callback 'spdk:(nil)' registered 00:05:10.347 00:05:10.347 00:05:10.347 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.347 http://cunit.sourceforge.net/ 00:05:10.347 00:05:10.347 00:05:10.347 Suite: components_suite 00:05:10.347 Test: vtophys_malloc_test ...passed 00:05:10.347 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:10.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.347 EAL: Restoring previous memory policy: 4 00:05:10.347 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.347 EAL: request: mp_malloc_sync 00:05:10.347 EAL: No shared files mode enabled, IPC is disabled 00:05:10.347 EAL: Heap on socket 0 was expanded by 4MB 00:05:10.347 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.347 EAL: request: mp_malloc_sync 00:05:10.347 EAL: No shared files mode enabled, IPC is disabled 00:05:10.347 EAL: Heap on socket 0 was shrunk by 4MB 00:05:10.347 EAL: Trying to obtain current memory policy. 
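The reservation sizes above are internally consistent: each memseg list asks for 0x400000000 bytes of virtual address space, i.e. 16 GiB, which is exactly n_segs * hugepage_sz = 8192 * 2 MiB, and the small 0x61000-byte area requested before each list appears to hold that list's metadata. With 4 lists per NUMA node and 2 nodes, that is 8 * 16 GiB = 128 GiB of VA reserved up front, while physical hugepages are only committed as the heap later expands. A quick sanity check of the arithmetic:

    # Both expressions print 17179869184 (16 GiB).
    echo $(( 8192 * 2097152 )) $(( 0x400000000 ))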
00:05:10.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.347 EAL: Restoring previous memory policy: 4 00:05:10.347 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.347 EAL: request: mp_malloc_sync 00:05:10.347 EAL: No shared files mode enabled, IPC is disabled 00:05:10.347 EAL: Heap on socket 0 was expanded by 6MB 00:05:10.347 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.347 EAL: request: mp_malloc_sync 00:05:10.347 EAL: No shared files mode enabled, IPC is disabled 00:05:10.347 EAL: Heap on socket 0 was shrunk by 6MB 00:05:10.347 EAL: Trying to obtain current memory policy. 00:05:10.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.348 EAL: Restoring previous memory policy: 4 00:05:10.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.348 EAL: request: mp_malloc_sync 00:05:10.348 EAL: No shared files mode enabled, IPC is disabled 00:05:10.348 EAL: Heap on socket 0 was expanded by 10MB 00:05:10.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.348 EAL: request: mp_malloc_sync 00:05:10.348 EAL: No shared files mode enabled, IPC is disabled 00:05:10.348 EAL: Heap on socket 0 was shrunk by 10MB 00:05:10.348 EAL: Trying to obtain current memory policy. 00:05:10.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.348 EAL: Restoring previous memory policy: 4 00:05:10.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.348 EAL: request: mp_malloc_sync 00:05:10.348 EAL: No shared files mode enabled, IPC is disabled 00:05:10.348 EAL: Heap on socket 0 was expanded by 18MB 00:05:10.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.348 EAL: request: mp_malloc_sync 00:05:10.348 EAL: No shared files mode enabled, IPC is disabled 00:05:10.348 EAL: Heap on socket 0 was shrunk by 18MB 00:05:10.348 EAL: Trying to obtain current memory policy. 00:05:10.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.348 EAL: Restoring previous memory policy: 4 00:05:10.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.348 EAL: request: mp_malloc_sync 00:05:10.348 EAL: No shared files mode enabled, IPC is disabled 00:05:10.348 EAL: Heap on socket 0 was expanded by 34MB 00:05:10.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.348 EAL: request: mp_malloc_sync 00:05:10.348 EAL: No shared files mode enabled, IPC is disabled 00:05:10.348 EAL: Heap on socket 0 was shrunk by 34MB 00:05:10.348 EAL: Trying to obtain current memory policy. 00:05:10.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.348 EAL: Restoring previous memory policy: 4 00:05:10.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.348 EAL: request: mp_malloc_sync 00:05:10.348 EAL: No shared files mode enabled, IPC is disabled 00:05:10.348 EAL: Heap on socket 0 was expanded by 66MB 00:05:10.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.348 EAL: request: mp_malloc_sync 00:05:10.348 EAL: No shared files mode enabled, IPC is disabled 00:05:10.348 EAL: Heap on socket 0 was shrunk by 66MB 00:05:10.348 EAL: Trying to obtain current memory policy. 
00:05:10.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.348 EAL: Restoring previous memory policy: 4 00:05:10.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.348 EAL: request: mp_malloc_sync 00:05:10.348 EAL: No shared files mode enabled, IPC is disabled 00:05:10.348 EAL: Heap on socket 0 was expanded by 130MB 00:05:10.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.606 EAL: request: mp_malloc_sync 00:05:10.606 EAL: No shared files mode enabled, IPC is disabled 00:05:10.606 EAL: Heap on socket 0 was shrunk by 130MB 00:05:10.606 EAL: Trying to obtain current memory policy. 00:05:10.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.606 EAL: Restoring previous memory policy: 4 00:05:10.606 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.606 EAL: request: mp_malloc_sync 00:05:10.606 EAL: No shared files mode enabled, IPC is disabled 00:05:10.606 EAL: Heap on socket 0 was expanded by 258MB 00:05:10.606 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.606 EAL: request: mp_malloc_sync 00:05:10.606 EAL: No shared files mode enabled, IPC is disabled 00:05:10.606 EAL: Heap on socket 0 was shrunk by 258MB 00:05:10.606 EAL: Trying to obtain current memory policy. 00:05:10.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.606 EAL: Restoring previous memory policy: 4 00:05:10.606 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.606 EAL: request: mp_malloc_sync 00:05:10.606 EAL: No shared files mode enabled, IPC is disabled 00:05:10.606 EAL: Heap on socket 0 was expanded by 514MB 00:05:10.864 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.864 EAL: request: mp_malloc_sync 00:05:10.864 EAL: No shared files mode enabled, IPC is disabled 00:05:10.864 EAL: Heap on socket 0 was shrunk by 514MB 00:05:10.864 EAL: Trying to obtain current memory policy. 
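One pattern worth noting in the expand/shrink pairs above: every "expanded by N MB" value is a power of two plus 2 (4, 6, 10, 18, 34, 66, 130, 258, 514, and, just below, 1026), consistent with the test malloc'ing power-of-two buffers from 2 MB up to 1 GiB on top of the initial 2 MB the heap already holds; each step is mirrored by an mp_malloc_sync request/response and an equal shrink once the buffer is freed. The series is easy to reproduce:

    # Prints 4 6 10 18 34 66 130 258 514 1026 (MB), matching the log above.
    for k in $(seq 1 10); do printf '%d ' $(( (1 << k) + 2 )); done; echo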
00:05:10.864 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.123 EAL: Restoring previous memory policy: 4 00:05:11.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.123 EAL: request: mp_malloc_sync 00:05:11.123 EAL: No shared files mode enabled, IPC is disabled 00:05:11.123 EAL: Heap on socket 0 was expanded by 1026MB 00:05:11.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.382 EAL: request: mp_malloc_sync 00:05:11.382 EAL: No shared files mode enabled, IPC is disabled 00:05:11.382 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:11.382 passed 00:05:11.382 00:05:11.382 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.382 suites 1 1 n/a 0 0 00:05:11.382 tests 2 2 2 0 0 00:05:11.382 asserts 497 497 497 0 n/a 00:05:11.382 00:05:11.382 Elapsed time = 0.979 seconds 00:05:11.382 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.382 EAL: request: mp_malloc_sync 00:05:11.382 EAL: No shared files mode enabled, IPC is disabled 00:05:11.382 EAL: Heap on socket 0 was shrunk by 2MB 00:05:11.382 EAL: No shared files mode enabled, IPC is disabled 00:05:11.382 EAL: No shared files mode enabled, IPC is disabled 00:05:11.382 EAL: No shared files mode enabled, IPC is disabled 00:05:11.382 00:05:11.382 real 0m1.100s 00:05:11.382 user 0m0.635s 00:05:11.382 sys 0m0.442s 00:05:11.382 18:55:28 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.382 18:55:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:11.382 ************************************ 00:05:11.382 END TEST env_vtophys 00:05:11.382 ************************************ 00:05:11.382 18:55:28 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:11.382 18:55:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.382 18:55:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.382 18:55:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.382 ************************************ 00:05:11.382 START TEST env_pci 00:05:11.382 ************************************ 00:05:11.382 18:55:28 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:11.382 00:05:11.382 00:05:11.382 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.382 http://cunit.sourceforge.net/ 00:05:11.382 00:05:11.382 00:05:11.382 Suite: pci 00:05:11.382 Test: pci_hook ...[2024-11-26 18:55:28.571706] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2724758 has claimed it 00:05:11.641 EAL: Cannot find device (10000:00:01.0) 00:05:11.641 EAL: Failed to attach device on primary process 00:05:11.641 passed 00:05:11.641 00:05:11.641 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.641 suites 1 1 n/a 0 0 00:05:11.641 tests 1 1 1 0 0 00:05:11.641 asserts 25 25 25 0 n/a 00:05:11.641 00:05:11.641 Elapsed time = 0.032 seconds 00:05:11.641 00:05:11.641 real 0m0.053s 00:05:11.641 user 0m0.016s 00:05:11.641 sys 0m0.036s 00:05:11.641 18:55:28 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.641 18:55:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:11.641 ************************************ 00:05:11.641 END TEST env_pci 00:05:11.641 ************************************ 00:05:11.641 18:55:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:11.641 
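The pci_ut errors above are the expected outcome of a negative test: the harness arranges for a device in the bogus PCI domain 10000 to look already claimed (the message blames process 2724758) and for probing of the nonexistent 10000:00:01.0 device to fail, and the suite passes because those refusals are exactly what spdk_pci_device_claim is asserted to do. The per-device lock files it checks follow a simple naming convention, visible in the error message itself:

    # One lock file per BDF; the path below is the one named in the error above.
    ls -l /var/tmp/spdk_pci_lock_10000:00:01.0 2>/dev/null || echo 'lock already cleaned up'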
18:55:28 env -- env/env.sh@15 -- # uname 00:05:11.641 18:55:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:11.641 18:55:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:11.641 18:55:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.641 18:55:28 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:11.642 18:55:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.642 18:55:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.642 ************************************ 00:05:11.642 START TEST env_dpdk_post_init 00:05:11.642 ************************************ 00:05:11.642 18:55:28 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.642 EAL: Detected CPU lcores: 72 00:05:11.642 EAL: Detected NUMA nodes: 2 00:05:11.642 EAL: Detected static linkage of DPDK 00:05:11.642 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.642 EAL: Selected IOVA mode 'VA' 00:05:11.642 EAL: VFIO support initialized 00:05:11.642 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.642 EAL: Using IOMMU type 1 (Type 1) 00:05:12.576 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:17.845 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:17.845 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001000000 00:05:18.104 Starting DPDK initialization... 00:05:18.104 Starting SPDK post initialization... 00:05:18.104 SPDK NVMe probe 00:05:18.104 Attaching to 0000:5e:00.0 00:05:18.104 Attached to 0000:5e:00.0 00:05:18.104 Cleaning up... 
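Putting the pieces together, the invocation env.sh assembled above (argv starting as '-c 0x1 ' plus the --base-virtaddr flag appended on Linux) is equivalent to running, with this workspace's path:

    # Single-core mask and fixed base virtual address, as in the run_test line above.
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000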
00:05:18.104 00:05:18.104 real 0m6.496s 00:05:18.104 user 0m4.752s 00:05:18.104 sys 0m0.996s 00:05:18.104 18:55:35 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.104 18:55:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.104 ************************************ 00:05:18.104 END TEST env_dpdk_post_init 00:05:18.104 ************************************ 00:05:18.104 18:55:35 env -- env/env.sh@26 -- # uname 00:05:18.104 18:55:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:18.104 18:55:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.104 18:55:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.104 18:55:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.104 18:55:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.104 ************************************ 00:05:18.104 START TEST env_mem_callbacks 00:05:18.104 ************************************ 00:05:18.104 18:55:35 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.104 EAL: Detected CPU lcores: 72 00:05:18.104 EAL: Detected NUMA nodes: 2 00:05:18.104 EAL: Detected static linkage of DPDK 00:05:18.104 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:18.363 EAL: Selected IOVA mode 'VA' 00:05:18.363 EAL: VFIO support initialized 00:05:18.363 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:18.363 00:05:18.363 00:05:18.363 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.363 http://cunit.sourceforge.net/ 00:05:18.363 00:05:18.363 00:05:18.363 Suite: memory 00:05:18.363 Test: test ... 
00:05:18.363 register 0x200000200000 2097152 00:05:18.363 malloc 3145728 00:05:18.363 register 0x200000400000 4194304 00:05:18.363 buf 0x200000500000 len 3145728 PASSED 00:05:18.363 malloc 64 00:05:18.363 buf 0x2000004fff40 len 64 PASSED 00:05:18.363 malloc 4194304 00:05:18.363 register 0x200000800000 6291456 00:05:18.363 buf 0x200000a00000 len 4194304 PASSED 00:05:18.363 free 0x200000500000 3145728 00:05:18.363 free 0x2000004fff40 64 00:05:18.363 unregister 0x200000400000 4194304 PASSED 00:05:18.363 free 0x200000a00000 4194304 00:05:18.363 unregister 0x200000800000 6291456 PASSED 00:05:18.363 malloc 8388608 00:05:18.363 register 0x200000400000 10485760 00:05:18.363 buf 0x200000600000 len 8388608 PASSED 00:05:18.363 free 0x200000600000 8388608 00:05:18.363 unregister 0x200000400000 10485760 PASSED 00:05:18.363 passed 00:05:18.363 00:05:18.363 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.363 suites 1 1 n/a 0 0 00:05:18.363 tests 1 1 1 0 0 00:05:18.363 asserts 15 15 15 0 n/a 00:05:18.363 00:05:18.364 Elapsed time = 0.005 seconds 00:05:18.364 00:05:18.364 real 0m0.065s 00:05:18.364 user 0m0.018s 00:05:18.364 sys 0m0.047s 00:05:18.364 18:55:35 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.364 18:55:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:18.364 ************************************ 00:05:18.364 END TEST env_mem_callbacks 00:05:18.364 ************************************ 00:05:18.364 00:05:18.364 real 0m8.396s 00:05:18.364 user 0m5.765s 00:05:18.364 sys 0m1.902s 00:05:18.364 18:55:35 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.364 18:55:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.364 ************************************ 00:05:18.364 END TEST env 00:05:18.364 ************************************ 00:05:18.364 18:55:35 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:18.364 18:55:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.364 18:55:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.364 18:55:35 -- common/autotest_common.sh@10 -- # set +x 00:05:18.364 ************************************ 00:05:18.364 START TEST rpc 00:05:18.364 ************************************ 00:05:18.364 18:55:35 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:18.364 * Looking for test storage... 
00:05:18.364 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:18.364 18:55:35 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.364 18:55:35 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.364 18:55:35 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.623 18:55:35 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.623 18:55:35 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.623 18:55:35 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.623 18:55:35 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.623 18:55:35 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.623 18:55:35 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.623 18:55:35 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.623 18:55:35 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.623 18:55:35 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.623 18:55:35 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.623 18:55:35 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.623 18:55:35 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.623 18:55:35 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:18.623 18:55:35 rpc -- scripts/common.sh@345 -- # : 1 00:05:18.623 18:55:35 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.623 18:55:35 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.623 18:55:35 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:18.623 18:55:35 rpc -- scripts/common.sh@353 -- # local d=1 00:05:18.623 18:55:35 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.623 18:55:35 rpc -- scripts/common.sh@355 -- # echo 1 00:05:18.623 18:55:35 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.623 18:55:35 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:18.623 18:55:35 rpc -- scripts/common.sh@353 -- # local d=2 00:05:18.623 18:55:35 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.623 18:55:35 rpc -- scripts/common.sh@355 -- # echo 2 00:05:18.623 18:55:35 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.623 18:55:35 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.623 18:55:35 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.623 18:55:35 rpc -- scripts/common.sh@368 -- # return 0 00:05:18.623 18:55:35 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.623 18:55:35 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.623 --rc genhtml_branch_coverage=1 00:05:18.623 --rc genhtml_function_coverage=1 00:05:18.623 --rc genhtml_legend=1 00:05:18.623 --rc geninfo_all_blocks=1 00:05:18.623 --rc geninfo_unexecuted_blocks=1 00:05:18.623 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:18.623 ' 00:05:18.623 18:55:35 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.623 --rc genhtml_branch_coverage=1 00:05:18.623 --rc genhtml_function_coverage=1 00:05:18.623 --rc genhtml_legend=1 00:05:18.623 --rc geninfo_all_blocks=1 00:05:18.623 --rc geninfo_unexecuted_blocks=1 00:05:18.623 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:18.623 ' 00:05:18.623 18:55:35 rpc -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:05:18.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.623 --rc genhtml_branch_coverage=1 00:05:18.623 --rc genhtml_function_coverage=1 00:05:18.623 --rc genhtml_legend=1 00:05:18.623 --rc geninfo_all_blocks=1 00:05:18.623 --rc geninfo_unexecuted_blocks=1 00:05:18.623 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:18.623 ' 00:05:18.623 18:55:35 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.623 --rc genhtml_branch_coverage=1 00:05:18.623 --rc genhtml_function_coverage=1 00:05:18.623 --rc genhtml_legend=1 00:05:18.623 --rc geninfo_all_blocks=1 00:05:18.623 --rc geninfo_unexecuted_blocks=1 00:05:18.623 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:18.623 ' 00:05:18.623 18:55:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2725850 00:05:18.623 18:55:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.623 18:55:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2725850 00:05:18.623 18:55:35 rpc -- common/autotest_common.sh@835 -- # '[' -z 2725850 ']' 00:05:18.623 18:55:35 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.623 18:55:35 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.623 18:55:35 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.623 18:55:35 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.623 18:55:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.623 18:55:35 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:18.623 [2024-11-26 18:55:35.661378] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:18.624 [2024-11-26 18:55:35.661442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2725850 ] 00:05:18.624 [2024-11-26 18:55:35.731677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.624 [2024-11-26 18:55:35.779731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:18.624 [2024-11-26 18:55:35.779768] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2725850' to capture a snapshot of events at runtime. 00:05:18.624 [2024-11-26 18:55:35.779784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:18.624 [2024-11-26 18:55:35.779795] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:18.624 [2024-11-26 18:55:35.779805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2725850 for offline analysis/debug. 
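Once spdk_tgt reports that it is listening (the default UNIX socket is /var/tmp/spdk.sock, as the wait message above shows), the rpc tests drive it through scripts/rpc.py; a minimal manual smoke check along the same lines, using this workspace's paths, would be:

    rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    # List the JSON-RPC methods the target exposes.
    $rpc -s /var/tmp/spdk.sock rpc_get_methods
    # Create the 8 MB, 512-byte-block malloc bdev the integrity test below uses.
    $rpc -s /var/tmp/spdk.sock bdev_malloc_create 8 512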
00:05:18.624 [2024-11-26 18:55:35.780252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.883 18:55:35 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.883 18:55:35 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.883 18:55:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:18.883 18:55:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:18.883 18:55:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:18.883 18:55:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:18.883 18:55:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.883 18:55:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.883 18:55:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.883 ************************************ 00:05:18.883 START TEST rpc_integrity 00:05:18.883 ************************************ 00:05:18.883 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:18.883 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:18.883 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.883 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.883 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.883 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:18.883 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:18.883 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:18.883 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.883 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.883 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.883 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.883 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:18.883 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:18.883 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.883 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.142 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.142 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:19.142 { 00:05:19.142 "name": "Malloc0", 00:05:19.142 "aliases": [ 00:05:19.142 "07c14674-78d3-4e12-a9da-0ac417c02211" 00:05:19.142 ], 00:05:19.142 "product_name": "Malloc disk", 00:05:19.142 "block_size": 512, 00:05:19.142 "num_blocks": 16384, 00:05:19.142 "uuid": "07c14674-78d3-4e12-a9da-0ac417c02211", 00:05:19.142 "assigned_rate_limits": { 00:05:19.142 "rw_ios_per_sec": 0, 00:05:19.142 "rw_mbytes_per_sec": 0, 00:05:19.142 "r_mbytes_per_sec": 0, 00:05:19.142 "w_mbytes_per_sec": 
0 00:05:19.142 }, 00:05:19.142 "claimed": false, 00:05:19.142 "zoned": false, 00:05:19.142 "supported_io_types": { 00:05:19.142 "read": true, 00:05:19.142 "write": true, 00:05:19.142 "unmap": true, 00:05:19.142 "flush": true, 00:05:19.142 "reset": true, 00:05:19.142 "nvme_admin": false, 00:05:19.142 "nvme_io": false, 00:05:19.142 "nvme_io_md": false, 00:05:19.142 "write_zeroes": true, 00:05:19.142 "zcopy": true, 00:05:19.142 "get_zone_info": false, 00:05:19.142 "zone_management": false, 00:05:19.142 "zone_append": false, 00:05:19.142 "compare": false, 00:05:19.142 "compare_and_write": false, 00:05:19.142 "abort": true, 00:05:19.142 "seek_hole": false, 00:05:19.142 "seek_data": false, 00:05:19.142 "copy": true, 00:05:19.142 "nvme_iov_md": false 00:05:19.142 }, 00:05:19.142 "memory_domains": [ 00:05:19.142 { 00:05:19.142 "dma_device_id": "system", 00:05:19.142 "dma_device_type": 1 00:05:19.142 }, 00:05:19.142 { 00:05:19.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.142 "dma_device_type": 2 00:05:19.142 } 00:05:19.142 ], 00:05:19.142 "driver_specific": {} 00:05:19.142 } 00:05:19.142 ]' 00:05:19.142 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:19.142 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:19.142 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:19.142 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.142 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.142 [2024-11-26 18:55:36.141747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:19.142 [2024-11-26 18:55:36.141783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:19.142 [2024-11-26 18:55:36.141809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x58696c0 00:05:19.142 [2024-11-26 18:55:36.141822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:19.142 [2024-11-26 18:55:36.142807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:19.142 [2024-11-26 18:55:36.142831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:19.142 Passthru0 00:05:19.142 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.142 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:19.142 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.142 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.142 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.142 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:19.142 { 00:05:19.142 "name": "Malloc0", 00:05:19.142 "aliases": [ 00:05:19.142 "07c14674-78d3-4e12-a9da-0ac417c02211" 00:05:19.142 ], 00:05:19.142 "product_name": "Malloc disk", 00:05:19.142 "block_size": 512, 00:05:19.142 "num_blocks": 16384, 00:05:19.142 "uuid": "07c14674-78d3-4e12-a9da-0ac417c02211", 00:05:19.142 "assigned_rate_limits": { 00:05:19.143 "rw_ios_per_sec": 0, 00:05:19.143 "rw_mbytes_per_sec": 0, 00:05:19.143 "r_mbytes_per_sec": 0, 00:05:19.143 "w_mbytes_per_sec": 0 00:05:19.143 }, 00:05:19.143 "claimed": true, 00:05:19.143 "claim_type": "exclusive_write", 00:05:19.143 "zoned": false, 00:05:19.143 "supported_io_types": { 00:05:19.143 "read": true, 00:05:19.143 "write": true, 00:05:19.143 "unmap": true, 
00:05:19.143 "flush": true, 00:05:19.143 "reset": true, 00:05:19.143 "nvme_admin": false, 00:05:19.143 "nvme_io": false, 00:05:19.143 "nvme_io_md": false, 00:05:19.143 "write_zeroes": true, 00:05:19.143 "zcopy": true, 00:05:19.143 "get_zone_info": false, 00:05:19.143 "zone_management": false, 00:05:19.143 "zone_append": false, 00:05:19.143 "compare": false, 00:05:19.143 "compare_and_write": false, 00:05:19.143 "abort": true, 00:05:19.143 "seek_hole": false, 00:05:19.143 "seek_data": false, 00:05:19.143 "copy": true, 00:05:19.143 "nvme_iov_md": false 00:05:19.143 }, 00:05:19.143 "memory_domains": [ 00:05:19.143 { 00:05:19.143 "dma_device_id": "system", 00:05:19.143 "dma_device_type": 1 00:05:19.143 }, 00:05:19.143 { 00:05:19.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.143 "dma_device_type": 2 00:05:19.143 } 00:05:19.143 ], 00:05:19.143 "driver_specific": {} 00:05:19.143 }, 00:05:19.143 { 00:05:19.143 "name": "Passthru0", 00:05:19.143 "aliases": [ 00:05:19.143 "22fd0626-5182-5a8c-a9fa-1df760a57fba" 00:05:19.143 ], 00:05:19.143 "product_name": "passthru", 00:05:19.143 "block_size": 512, 00:05:19.143 "num_blocks": 16384, 00:05:19.143 "uuid": "22fd0626-5182-5a8c-a9fa-1df760a57fba", 00:05:19.143 "assigned_rate_limits": { 00:05:19.143 "rw_ios_per_sec": 0, 00:05:19.143 "rw_mbytes_per_sec": 0, 00:05:19.143 "r_mbytes_per_sec": 0, 00:05:19.143 "w_mbytes_per_sec": 0 00:05:19.143 }, 00:05:19.143 "claimed": false, 00:05:19.143 "zoned": false, 00:05:19.143 "supported_io_types": { 00:05:19.143 "read": true, 00:05:19.143 "write": true, 00:05:19.143 "unmap": true, 00:05:19.143 "flush": true, 00:05:19.143 "reset": true, 00:05:19.143 "nvme_admin": false, 00:05:19.143 "nvme_io": false, 00:05:19.143 "nvme_io_md": false, 00:05:19.143 "write_zeroes": true, 00:05:19.143 "zcopy": true, 00:05:19.143 "get_zone_info": false, 00:05:19.143 "zone_management": false, 00:05:19.143 "zone_append": false, 00:05:19.143 "compare": false, 00:05:19.143 "compare_and_write": false, 00:05:19.143 "abort": true, 00:05:19.143 "seek_hole": false, 00:05:19.143 "seek_data": false, 00:05:19.143 "copy": true, 00:05:19.143 "nvme_iov_md": false 00:05:19.143 }, 00:05:19.143 "memory_domains": [ 00:05:19.143 { 00:05:19.143 "dma_device_id": "system", 00:05:19.143 "dma_device_type": 1 00:05:19.143 }, 00:05:19.143 { 00:05:19.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.143 "dma_device_type": 2 00:05:19.143 } 00:05:19.143 ], 00:05:19.143 "driver_specific": { 00:05:19.143 "passthru": { 00:05:19.143 "name": "Passthru0", 00:05:19.143 "base_bdev_name": "Malloc0" 00:05:19.143 } 00:05:19.143 } 00:05:19.143 } 00:05:19.143 ]' 00:05:19.143 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:19.143 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:19.143 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:19.143 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.143 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.143 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.143 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:19.143 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.143 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.143 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.143 18:55:36 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:19.143 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.143 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.143 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.143 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:19.143 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:19.143 18:55:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:19.143 00:05:19.143 real 0m0.263s 00:05:19.143 user 0m0.154s 00:05:19.143 sys 0m0.041s 00:05:19.143 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.143 18:55:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.143 ************************************ 00:05:19.143 END TEST rpc_integrity 00:05:19.143 ************************************ 00:05:19.143 18:55:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:19.143 18:55:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.143 18:55:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.143 18:55:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.402 ************************************ 00:05:19.402 START TEST rpc_plugins 00:05:19.402 ************************************ 00:05:19.402 18:55:36 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:19.402 18:55:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:19.402 18:55:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.402 18:55:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.402 18:55:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.402 18:55:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:19.402 18:55:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:19.402 18:55:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.402 18:55:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.402 18:55:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.402 18:55:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:19.402 { 00:05:19.402 "name": "Malloc1", 00:05:19.402 "aliases": [ 00:05:19.402 "7a890ab7-e688-4858-b6b2-90eb52f2c019" 00:05:19.402 ], 00:05:19.402 "product_name": "Malloc disk", 00:05:19.402 "block_size": 4096, 00:05:19.402 "num_blocks": 256, 00:05:19.402 "uuid": "7a890ab7-e688-4858-b6b2-90eb52f2c019", 00:05:19.402 "assigned_rate_limits": { 00:05:19.402 "rw_ios_per_sec": 0, 00:05:19.402 "rw_mbytes_per_sec": 0, 00:05:19.402 "r_mbytes_per_sec": 0, 00:05:19.402 "w_mbytes_per_sec": 0 00:05:19.402 }, 00:05:19.402 "claimed": false, 00:05:19.402 "zoned": false, 00:05:19.402 "supported_io_types": { 00:05:19.402 "read": true, 00:05:19.402 "write": true, 00:05:19.402 "unmap": true, 00:05:19.402 "flush": true, 00:05:19.402 "reset": true, 00:05:19.402 "nvme_admin": false, 00:05:19.402 "nvme_io": false, 00:05:19.402 "nvme_io_md": false, 00:05:19.402 "write_zeroes": true, 00:05:19.402 "zcopy": true, 00:05:19.402 "get_zone_info": false, 00:05:19.402 "zone_management": false, 00:05:19.402 "zone_append": false, 00:05:19.402 "compare": false, 00:05:19.402 "compare_and_write": false, 00:05:19.402 "abort": true, 00:05:19.402 "seek_hole": false, 00:05:19.402 "seek_data": false, 00:05:19.402 "copy": true, 00:05:19.402 
"nvme_iov_md": false 00:05:19.402 }, 00:05:19.402 "memory_domains": [ 00:05:19.402 { 00:05:19.402 "dma_device_id": "system", 00:05:19.402 "dma_device_type": 1 00:05:19.402 }, 00:05:19.402 { 00:05:19.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.402 "dma_device_type": 2 00:05:19.402 } 00:05:19.402 ], 00:05:19.402 "driver_specific": {} 00:05:19.402 } 00:05:19.402 ]' 00:05:19.402 18:55:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:19.402 18:55:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:19.402 18:55:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:19.402 18:55:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.402 18:55:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.402 18:55:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.402 18:55:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:19.402 18:55:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.402 18:55:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.402 18:55:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.402 18:55:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:19.402 18:55:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:19.402 18:55:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:19.402 00:05:19.402 real 0m0.147s 00:05:19.402 user 0m0.082s 00:05:19.402 sys 0m0.026s 00:05:19.402 18:55:36 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.402 18:55:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.402 ************************************ 00:05:19.402 END TEST rpc_plugins 00:05:19.402 ************************************ 00:05:19.402 18:55:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:19.402 18:55:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.402 18:55:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.402 18:55:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.402 ************************************ 00:05:19.402 START TEST rpc_trace_cmd_test 00:05:19.402 ************************************ 00:05:19.402 18:55:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:19.402 18:55:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:19.402 18:55:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:19.402 18:55:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.402 18:55:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:19.402 18:55:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.402 18:55:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:19.402 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2725850", 00:05:19.402 "tpoint_group_mask": "0x8", 00:05:19.402 "iscsi_conn": { 00:05:19.402 "mask": "0x2", 00:05:19.402 "tpoint_mask": "0x0" 00:05:19.402 }, 00:05:19.402 "scsi": { 00:05:19.402 "mask": "0x4", 00:05:19.402 "tpoint_mask": "0x0" 00:05:19.402 }, 00:05:19.402 "bdev": { 00:05:19.402 "mask": "0x8", 00:05:19.402 "tpoint_mask": "0xffffffffffffffff" 00:05:19.402 }, 00:05:19.402 "nvmf_rdma": { 00:05:19.402 "mask": "0x10", 00:05:19.402 "tpoint_mask": "0x0" 00:05:19.402 }, 00:05:19.402 "nvmf_tcp": { 00:05:19.402 "mask": "0x20", 
00:05:19.402 "tpoint_mask": "0x0" 00:05:19.402 }, 00:05:19.402 "ftl": { 00:05:19.402 "mask": "0x40", 00:05:19.402 "tpoint_mask": "0x0" 00:05:19.402 }, 00:05:19.402 "blobfs": { 00:05:19.402 "mask": "0x80", 00:05:19.402 "tpoint_mask": "0x0" 00:05:19.402 }, 00:05:19.402 "dsa": { 00:05:19.402 "mask": "0x200", 00:05:19.402 "tpoint_mask": "0x0" 00:05:19.402 }, 00:05:19.402 "thread": { 00:05:19.402 "mask": "0x400", 00:05:19.402 "tpoint_mask": "0x0" 00:05:19.402 }, 00:05:19.402 "nvme_pcie": { 00:05:19.402 "mask": "0x800", 00:05:19.402 "tpoint_mask": "0x0" 00:05:19.402 }, 00:05:19.403 "iaa": { 00:05:19.403 "mask": "0x1000", 00:05:19.403 "tpoint_mask": "0x0" 00:05:19.403 }, 00:05:19.403 "nvme_tcp": { 00:05:19.403 "mask": "0x2000", 00:05:19.403 "tpoint_mask": "0x0" 00:05:19.403 }, 00:05:19.403 "bdev_nvme": { 00:05:19.403 "mask": "0x4000", 00:05:19.403 "tpoint_mask": "0x0" 00:05:19.403 }, 00:05:19.403 "sock": { 00:05:19.403 "mask": "0x8000", 00:05:19.403 "tpoint_mask": "0x0" 00:05:19.403 }, 00:05:19.403 "blob": { 00:05:19.403 "mask": "0x10000", 00:05:19.403 "tpoint_mask": "0x0" 00:05:19.403 }, 00:05:19.403 "bdev_raid": { 00:05:19.403 "mask": "0x20000", 00:05:19.403 "tpoint_mask": "0x0" 00:05:19.403 }, 00:05:19.403 "scheduler": { 00:05:19.403 "mask": "0x40000", 00:05:19.403 "tpoint_mask": "0x0" 00:05:19.403 } 00:05:19.403 }' 00:05:19.403 18:55:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:19.662 18:55:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:19.662 18:55:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:19.662 18:55:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:19.662 18:55:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:19.662 18:55:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:19.662 18:55:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:19.662 18:55:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:19.662 18:55:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:19.662 18:55:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:19.662 00:05:19.662 real 0m0.226s 00:05:19.662 user 0m0.187s 00:05:19.662 sys 0m0.031s 00:05:19.662 18:55:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.662 18:55:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:19.662 ************************************ 00:05:19.662 END TEST rpc_trace_cmd_test 00:05:19.662 ************************************ 00:05:19.662 18:55:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:19.662 18:55:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:19.662 18:55:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:19.662 18:55:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.662 18:55:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.662 18:55:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.923 ************************************ 00:05:19.923 START TEST rpc_daemon_integrity 00:05:19.923 ************************************ 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.923 18:55:36 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:19.923 { 00:05:19.923 "name": "Malloc2", 00:05:19.923 "aliases": [ 00:05:19.923 "179a3c06-1337-4a69-aa3a-648edd8a1d81" 00:05:19.923 ], 00:05:19.923 "product_name": "Malloc disk", 00:05:19.923 "block_size": 512, 00:05:19.923 "num_blocks": 16384, 00:05:19.923 "uuid": "179a3c06-1337-4a69-aa3a-648edd8a1d81", 00:05:19.923 "assigned_rate_limits": { 00:05:19.923 "rw_ios_per_sec": 0, 00:05:19.923 "rw_mbytes_per_sec": 0, 00:05:19.923 "r_mbytes_per_sec": 0, 00:05:19.923 "w_mbytes_per_sec": 0 00:05:19.923 }, 00:05:19.923 "claimed": false, 00:05:19.923 "zoned": false, 00:05:19.923 "supported_io_types": { 00:05:19.923 "read": true, 00:05:19.923 "write": true, 00:05:19.923 "unmap": true, 00:05:19.923 "flush": true, 00:05:19.923 "reset": true, 00:05:19.923 "nvme_admin": false, 00:05:19.923 "nvme_io": false, 00:05:19.923 "nvme_io_md": false, 00:05:19.923 "write_zeroes": true, 00:05:19.923 "zcopy": true, 00:05:19.923 "get_zone_info": false, 00:05:19.923 "zone_management": false, 00:05:19.923 "zone_append": false, 00:05:19.923 "compare": false, 00:05:19.923 "compare_and_write": false, 00:05:19.923 "abort": true, 00:05:19.923 "seek_hole": false, 00:05:19.923 "seek_data": false, 00:05:19.923 "copy": true, 00:05:19.923 "nvme_iov_md": false 00:05:19.923 }, 00:05:19.923 "memory_domains": [ 00:05:19.923 { 00:05:19.923 "dma_device_id": "system", 00:05:19.923 "dma_device_type": 1 00:05:19.923 }, 00:05:19.923 { 00:05:19.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.923 "dma_device_type": 2 00:05:19.923 } 00:05:19.923 ], 00:05:19.923 "driver_specific": {} 00:05:19.923 } 00:05:19.923 ]' 00:05:19.923 18:55:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:19.923 18:55:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:19.923 18:55:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:19.923 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.923 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.923 [2024-11-26 18:55:37.016023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:19.923 
[2024-11-26 18:55:37.016060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:19.923 [2024-11-26 18:55:37.016086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x598b730 00:05:19.923 [2024-11-26 18:55:37.016101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:19.923 [2024-11-26 18:55:37.017054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:19.923 [2024-11-26 18:55:37.017079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:19.923 Passthru0 00:05:19.923 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.923 18:55:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:19.923 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.923 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.923 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.923 18:55:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:19.923 { 00:05:19.923 "name": "Malloc2", 00:05:19.923 "aliases": [ 00:05:19.923 "179a3c06-1337-4a69-aa3a-648edd8a1d81" 00:05:19.923 ], 00:05:19.923 "product_name": "Malloc disk", 00:05:19.923 "block_size": 512, 00:05:19.923 "num_blocks": 16384, 00:05:19.923 "uuid": "179a3c06-1337-4a69-aa3a-648edd8a1d81", 00:05:19.923 "assigned_rate_limits": { 00:05:19.923 "rw_ios_per_sec": 0, 00:05:19.923 "rw_mbytes_per_sec": 0, 00:05:19.923 "r_mbytes_per_sec": 0, 00:05:19.923 "w_mbytes_per_sec": 0 00:05:19.923 }, 00:05:19.923 "claimed": true, 00:05:19.923 "claim_type": "exclusive_write", 00:05:19.923 "zoned": false, 00:05:19.923 "supported_io_types": { 00:05:19.923 "read": true, 00:05:19.923 "write": true, 00:05:19.923 "unmap": true, 00:05:19.923 "flush": true, 00:05:19.923 "reset": true, 00:05:19.923 "nvme_admin": false, 00:05:19.923 "nvme_io": false, 00:05:19.923 "nvme_io_md": false, 00:05:19.923 "write_zeroes": true, 00:05:19.923 "zcopy": true, 00:05:19.923 "get_zone_info": false, 00:05:19.923 "zone_management": false, 00:05:19.923 "zone_append": false, 00:05:19.923 "compare": false, 00:05:19.923 "compare_and_write": false, 00:05:19.923 "abort": true, 00:05:19.923 "seek_hole": false, 00:05:19.923 "seek_data": false, 00:05:19.923 "copy": true, 00:05:19.923 "nvme_iov_md": false 00:05:19.923 }, 00:05:19.923 "memory_domains": [ 00:05:19.923 { 00:05:19.923 "dma_device_id": "system", 00:05:19.923 "dma_device_type": 1 00:05:19.923 }, 00:05:19.923 { 00:05:19.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.923 "dma_device_type": 2 00:05:19.923 } 00:05:19.923 ], 00:05:19.923 "driver_specific": {} 00:05:19.923 }, 00:05:19.923 { 00:05:19.923 "name": "Passthru0", 00:05:19.923 "aliases": [ 00:05:19.923 "9138fc6f-bedb-51b3-96da-435a9ae8a9aa" 00:05:19.923 ], 00:05:19.923 "product_name": "passthru", 00:05:19.923 "block_size": 512, 00:05:19.923 "num_blocks": 16384, 00:05:19.923 "uuid": "9138fc6f-bedb-51b3-96da-435a9ae8a9aa", 00:05:19.923 "assigned_rate_limits": { 00:05:19.923 "rw_ios_per_sec": 0, 00:05:19.923 "rw_mbytes_per_sec": 0, 00:05:19.923 "r_mbytes_per_sec": 0, 00:05:19.923 "w_mbytes_per_sec": 0 00:05:19.923 }, 00:05:19.923 "claimed": false, 00:05:19.923 "zoned": false, 00:05:19.923 "supported_io_types": { 00:05:19.923 "read": true, 00:05:19.923 "write": true, 00:05:19.923 "unmap": true, 00:05:19.923 "flush": true, 00:05:19.923 "reset": true, 
00:05:19.923 "nvme_admin": false, 00:05:19.923 "nvme_io": false, 00:05:19.923 "nvme_io_md": false, 00:05:19.923 "write_zeroes": true, 00:05:19.923 "zcopy": true, 00:05:19.923 "get_zone_info": false, 00:05:19.923 "zone_management": false, 00:05:19.923 "zone_append": false, 00:05:19.923 "compare": false, 00:05:19.923 "compare_and_write": false, 00:05:19.923 "abort": true, 00:05:19.923 "seek_hole": false, 00:05:19.923 "seek_data": false, 00:05:19.923 "copy": true, 00:05:19.923 "nvme_iov_md": false 00:05:19.923 }, 00:05:19.924 "memory_domains": [ 00:05:19.924 { 00:05:19.924 "dma_device_id": "system", 00:05:19.924 "dma_device_type": 1 00:05:19.924 }, 00:05:19.924 { 00:05:19.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.924 "dma_device_type": 2 00:05:19.924 } 00:05:19.924 ], 00:05:19.924 "driver_specific": { 00:05:19.924 "passthru": { 00:05:19.924 "name": "Passthru0", 00:05:19.924 "base_bdev_name": "Malloc2" 00:05:19.924 } 00:05:19.924 } 00:05:19.924 } 00:05:19.924 ]' 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:19.924 18:55:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:20.183 18:55:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:20.183 00:05:20.183 real 0m0.258s 00:05:20.183 user 0m0.153s 00:05:20.183 sys 0m0.046s 00:05:20.183 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.183 18:55:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.183 ************************************ 00:05:20.183 END TEST rpc_daemon_integrity 00:05:20.183 ************************************ 00:05:20.183 18:55:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:20.183 18:55:37 rpc -- rpc/rpc.sh@84 -- # killprocess 2725850 00:05:20.183 18:55:37 rpc -- common/autotest_common.sh@954 -- # '[' -z 2725850 ']' 00:05:20.183 18:55:37 rpc -- common/autotest_common.sh@958 -- # kill -0 2725850 00:05:20.183 18:55:37 rpc -- common/autotest_common.sh@959 -- # uname 00:05:20.183 18:55:37 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.183 18:55:37 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2725850 
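Annotation: the rpc_integrity, rpc_plugins, rpc_trace_cmd_test, and rpc_daemon_integrity subtests that just finished all follow one loop: create a bdev over RPC, optionally layer a passthru bdev on it, assert on the JSON that bdev_get_bdevs returns (jq length going 0 -> 1 -> 2 and back), then tear down. The same calls reduced to plain rpc.py invocations (a sketch; the harness wraps each one in its rpc_cmd helper):

  rpc.py bdev_get_bdevs | jq length            # expect 0
  rpc.py bdev_malloc_create 8 512              # prints the new bdev name, e.g. Malloc2
  rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
  rpc.py bdev_get_bdevs | jq length            # expect 2: malloc + passthru
  rpc.py bdev_passthru_delete Passthru0
  rpc.py bdev_malloc_delete Malloc2
  rpc.py bdev_get_bdevs | jq length            # back to 0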
00:05:20.183 18:55:37 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.183 18:55:37 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.183 18:55:37 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2725850' 00:05:20.183 killing process with pid 2725850 00:05:20.183 18:55:37 rpc -- common/autotest_common.sh@973 -- # kill 2725850 00:05:20.183 18:55:37 rpc -- common/autotest_common.sh@978 -- # wait 2725850 00:05:20.441 00:05:20.442 real 0m2.085s 00:05:20.442 user 0m2.606s 00:05:20.442 sys 0m0.755s 00:05:20.442 18:55:37 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.442 18:55:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.442 ************************************ 00:05:20.442 END TEST rpc 00:05:20.442 ************************************ 00:05:20.442 18:55:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:20.442 18:55:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.442 18:55:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.442 18:55:37 -- common/autotest_common.sh@10 -- # set +x 00:05:20.442 ************************************ 00:05:20.442 START TEST skip_rpc 00:05:20.442 ************************************ 00:05:20.442 18:55:37 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:20.701 * Looking for test storage... 00:05:20.701 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:20.701 18:55:37 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.701 18:55:37 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.701 18:55:37 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.701 18:55:37 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.701 18:55:37 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:20.701 18:55:37 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.701 18:55:37 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.701 --rc genhtml_branch_coverage=1 00:05:20.701 --rc genhtml_function_coverage=1 00:05:20.701 --rc genhtml_legend=1 00:05:20.701 --rc geninfo_all_blocks=1 00:05:20.701 --rc geninfo_unexecuted_blocks=1 00:05:20.701 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:20.701 ' 00:05:20.701 18:55:37 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.701 --rc genhtml_branch_coverage=1 00:05:20.701 --rc genhtml_function_coverage=1 00:05:20.701 --rc genhtml_legend=1 00:05:20.701 --rc geninfo_all_blocks=1 00:05:20.701 --rc geninfo_unexecuted_blocks=1 00:05:20.701 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:20.701 ' 00:05:20.701 18:55:37 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.701 --rc genhtml_branch_coverage=1 00:05:20.701 --rc genhtml_function_coverage=1 00:05:20.701 --rc genhtml_legend=1 00:05:20.701 --rc geninfo_all_blocks=1 00:05:20.701 --rc geninfo_unexecuted_blocks=1 00:05:20.701 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:20.701 ' 00:05:20.701 18:55:37 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.701 --rc genhtml_branch_coverage=1 00:05:20.701 --rc genhtml_function_coverage=1 00:05:20.701 --rc genhtml_legend=1 00:05:20.701 --rc geninfo_all_blocks=1 00:05:20.701 --rc geninfo_unexecuted_blocks=1 00:05:20.701 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:20.701 ' 00:05:20.701 18:55:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:20.701 18:55:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:20.701 18:55:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:20.701 18:55:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.701 18:55:37 
skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.701 18:55:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.701 ************************************ 00:05:20.701 START TEST skip_rpc 00:05:20.701 ************************************ 00:05:20.701 18:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:20.701 18:55:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2726303 00:05:20.701 18:55:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.701 18:55:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:20.701 18:55:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:20.701 [2024-11-26 18:55:37.836592] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:20.701 [2024-11-26 18:55:37.836673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2726303 ] 00:05:20.701 [2024-11-26 18:55:37.908367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.960 [2024-11-26 18:55:37.953545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2726303 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2726303 ']' 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2726303 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2726303 
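Annotation: skip_rpc (above) is the inverse check — the target is started with --no-rpc-server, so rpc_cmd spdk_get_version must fail, and the NOT wrapper turns that expected failure into a pass before killprocess tears the target down. A reduced sketch (the plain sleep 5 mirrors the harness's fixed wait, since there is no RPC socket to poll):

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5
  if scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
      echo 'FAIL: RPC answered although --no-rpc-server was given' >&2
      exit 1
  fi
  kill -9 $spdk_pid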
00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2726303' 00:05:26.226 killing process with pid 2726303 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2726303 00:05:26.226 18:55:42 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2726303 00:05:26.226 00:05:26.226 real 0m5.373s 00:05:26.226 user 0m5.129s 00:05:26.226 sys 0m0.282s 00:05:26.226 18:55:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.226 18:55:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.226 ************************************ 00:05:26.226 END TEST skip_rpc 00:05:26.226 ************************************ 00:05:26.226 18:55:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:26.226 18:55:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.226 18:55:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.226 18:55:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.226 ************************************ 00:05:26.226 START TEST skip_rpc_with_json 00:05:26.226 ************************************ 00:05:26.226 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:26.226 18:55:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:26.226 18:55:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2727113 00:05:26.226 18:55:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.226 18:55:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.226 18:55:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2727113 00:05:26.226 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2727113 ']' 00:05:26.226 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.226 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.226 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.226 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.226 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.226 [2024-11-26 18:55:43.286908] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
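Annotation: skip_rpc_with_json, whose target is starting here, drives the JSON-config round trip. Its first step is the error path: querying a transport before any exists must come back as a clean JSON-RPC error rather than crashing the target. In plain rpc.py terms (a sketch of the calls traced below):

  rpc.py nvmf_get_transports --trtype tcp   # fails: code -19, 'No such device'
  rpc.py nvmf_create_transport -t tcp       # logs '*** TCP Transport Init ***'
  rpc.py save_config                        # dumps the full subsystem config as JSON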
00:05:26.226 [2024-11-26 18:55:43.286968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2727113 ] 00:05:26.226 [2024-11-26 18:55:43.358354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.226 [2024-11-26 18:55:43.402806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.486 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.486 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:26.486 18:55:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:26.486 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.486 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.486 [2024-11-26 18:55:43.616805] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:26.486 request: 00:05:26.486 { 00:05:26.486 "trtype": "tcp", 00:05:26.486 "method": "nvmf_get_transports", 00:05:26.486 "req_id": 1 00:05:26.486 } 00:05:26.486 Got JSON-RPC error response 00:05:26.486 response: 00:05:26.486 { 00:05:26.486 "code": -19, 00:05:26.486 "message": "No such device" 00:05:26.486 } 00:05:26.486 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:26.486 18:55:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:26.486 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.486 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.486 [2024-11-26 18:55:43.628918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.486 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.486 18:55:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:26.486 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.486 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.745 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.745 18:55:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:26.745 { 00:05:26.745 "subsystems": [ 00:05:26.745 { 00:05:26.745 "subsystem": "scheduler", 00:05:26.745 "config": [ 00:05:26.745 { 00:05:26.745 "method": "framework_set_scheduler", 00:05:26.745 "params": { 00:05:26.745 "name": "static" 00:05:26.745 } 00:05:26.745 } 00:05:26.745 ] 00:05:26.745 }, 00:05:26.745 { 00:05:26.745 "subsystem": "vmd", 00:05:26.745 "config": [] 00:05:26.745 }, 00:05:26.745 { 00:05:26.745 "subsystem": "sock", 00:05:26.745 "config": [ 00:05:26.745 { 00:05:26.745 "method": "sock_set_default_impl", 00:05:26.745 "params": { 00:05:26.745 "impl_name": "posix" 00:05:26.745 } 00:05:26.745 }, 00:05:26.745 { 00:05:26.745 "method": "sock_impl_set_options", 00:05:26.745 "params": { 00:05:26.745 "impl_name": "ssl", 00:05:26.745 "recv_buf_size": 4096, 00:05:26.745 "send_buf_size": 4096, 00:05:26.745 "enable_recv_pipe": true, 00:05:26.745 "enable_quickack": false, 00:05:26.745 
"enable_placement_id": 0, 00:05:26.745 "enable_zerocopy_send_server": true, 00:05:26.745 "enable_zerocopy_send_client": false, 00:05:26.745 "zerocopy_threshold": 0, 00:05:26.745 "tls_version": 0, 00:05:26.745 "enable_ktls": false 00:05:26.745 } 00:05:26.745 }, 00:05:26.745 { 00:05:26.745 "method": "sock_impl_set_options", 00:05:26.745 "params": { 00:05:26.745 "impl_name": "posix", 00:05:26.745 "recv_buf_size": 2097152, 00:05:26.745 "send_buf_size": 2097152, 00:05:26.745 "enable_recv_pipe": true, 00:05:26.745 "enable_quickack": false, 00:05:26.745 "enable_placement_id": 0, 00:05:26.745 "enable_zerocopy_send_server": true, 00:05:26.745 "enable_zerocopy_send_client": false, 00:05:26.745 "zerocopy_threshold": 0, 00:05:26.745 "tls_version": 0, 00:05:26.745 "enable_ktls": false 00:05:26.745 } 00:05:26.745 } 00:05:26.745 ] 00:05:26.745 }, 00:05:26.745 { 00:05:26.745 "subsystem": "iobuf", 00:05:26.745 "config": [ 00:05:26.745 { 00:05:26.745 "method": "iobuf_set_options", 00:05:26.745 "params": { 00:05:26.745 "small_pool_count": 8192, 00:05:26.745 "large_pool_count": 1024, 00:05:26.745 "small_bufsize": 8192, 00:05:26.745 "large_bufsize": 135168, 00:05:26.745 "enable_numa": false 00:05:26.745 } 00:05:26.745 } 00:05:26.746 ] 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "subsystem": "keyring", 00:05:26.746 "config": [] 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "subsystem": "vfio_user_target", 00:05:26.746 "config": null 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "subsystem": "fsdev", 00:05:26.746 "config": [ 00:05:26.746 { 00:05:26.746 "method": "fsdev_set_opts", 00:05:26.746 "params": { 00:05:26.746 "fsdev_io_pool_size": 65535, 00:05:26.746 "fsdev_io_cache_size": 256 00:05:26.746 } 00:05:26.746 } 00:05:26.746 ] 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "subsystem": "accel", 00:05:26.746 "config": [ 00:05:26.746 { 00:05:26.746 "method": "accel_set_options", 00:05:26.746 "params": { 00:05:26.746 "small_cache_size": 128, 00:05:26.746 "large_cache_size": 16, 00:05:26.746 "task_count": 2048, 00:05:26.746 "sequence_count": 2048, 00:05:26.746 "buf_count": 2048 00:05:26.746 } 00:05:26.746 } 00:05:26.746 ] 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "subsystem": "bdev", 00:05:26.746 "config": [ 00:05:26.746 { 00:05:26.746 "method": "bdev_set_options", 00:05:26.746 "params": { 00:05:26.746 "bdev_io_pool_size": 65535, 00:05:26.746 "bdev_io_cache_size": 256, 00:05:26.746 "bdev_auto_examine": true, 00:05:26.746 "iobuf_small_cache_size": 128, 00:05:26.746 "iobuf_large_cache_size": 16 00:05:26.746 } 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "method": "bdev_raid_set_options", 00:05:26.746 "params": { 00:05:26.746 "process_window_size_kb": 1024, 00:05:26.746 "process_max_bandwidth_mb_sec": 0 00:05:26.746 } 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "method": "bdev_nvme_set_options", 00:05:26.746 "params": { 00:05:26.746 "action_on_timeout": "none", 00:05:26.746 "timeout_us": 0, 00:05:26.746 "timeout_admin_us": 0, 00:05:26.746 "keep_alive_timeout_ms": 10000, 00:05:26.746 "arbitration_burst": 0, 00:05:26.746 "low_priority_weight": 0, 00:05:26.746 "medium_priority_weight": 0, 00:05:26.746 "high_priority_weight": 0, 00:05:26.746 "nvme_adminq_poll_period_us": 10000, 00:05:26.746 "nvme_ioq_poll_period_us": 0, 00:05:26.746 "io_queue_requests": 0, 00:05:26.746 "delay_cmd_submit": true, 00:05:26.746 "transport_retry_count": 4, 00:05:26.746 "bdev_retry_count": 3, 00:05:26.746 "transport_ack_timeout": 0, 00:05:26.746 "ctrlr_loss_timeout_sec": 0, 00:05:26.746 "reconnect_delay_sec": 0, 00:05:26.746 
"fast_io_fail_timeout_sec": 0, 00:05:26.746 "disable_auto_failback": false, 00:05:26.746 "generate_uuids": false, 00:05:26.746 "transport_tos": 0, 00:05:26.746 "nvme_error_stat": false, 00:05:26.746 "rdma_srq_size": 0, 00:05:26.746 "io_path_stat": false, 00:05:26.746 "allow_accel_sequence": false, 00:05:26.746 "rdma_max_cq_size": 0, 00:05:26.746 "rdma_cm_event_timeout_ms": 0, 00:05:26.746 "dhchap_digests": [ 00:05:26.746 "sha256", 00:05:26.746 "sha384", 00:05:26.746 "sha512" 00:05:26.746 ], 00:05:26.746 "dhchap_dhgroups": [ 00:05:26.746 "null", 00:05:26.746 "ffdhe2048", 00:05:26.746 "ffdhe3072", 00:05:26.746 "ffdhe4096", 00:05:26.746 "ffdhe6144", 00:05:26.746 "ffdhe8192" 00:05:26.746 ] 00:05:26.746 } 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "method": "bdev_nvme_set_hotplug", 00:05:26.746 "params": { 00:05:26.746 "period_us": 100000, 00:05:26.746 "enable": false 00:05:26.746 } 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "method": "bdev_iscsi_set_options", 00:05:26.746 "params": { 00:05:26.746 "timeout_sec": 30 00:05:26.746 } 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "method": "bdev_wait_for_examine" 00:05:26.746 } 00:05:26.746 ] 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "subsystem": "nvmf", 00:05:26.746 "config": [ 00:05:26.746 { 00:05:26.746 "method": "nvmf_set_config", 00:05:26.746 "params": { 00:05:26.746 "discovery_filter": "match_any", 00:05:26.746 "admin_cmd_passthru": { 00:05:26.746 "identify_ctrlr": false 00:05:26.746 }, 00:05:26.746 "dhchap_digests": [ 00:05:26.746 "sha256", 00:05:26.746 "sha384", 00:05:26.746 "sha512" 00:05:26.746 ], 00:05:26.746 "dhchap_dhgroups": [ 00:05:26.746 "null", 00:05:26.746 "ffdhe2048", 00:05:26.746 "ffdhe3072", 00:05:26.746 "ffdhe4096", 00:05:26.746 "ffdhe6144", 00:05:26.746 "ffdhe8192" 00:05:26.746 ] 00:05:26.746 } 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "method": "nvmf_set_max_subsystems", 00:05:26.746 "params": { 00:05:26.746 "max_subsystems": 1024 00:05:26.746 } 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "method": "nvmf_set_crdt", 00:05:26.746 "params": { 00:05:26.746 "crdt1": 0, 00:05:26.746 "crdt2": 0, 00:05:26.746 "crdt3": 0 00:05:26.746 } 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "method": "nvmf_create_transport", 00:05:26.746 "params": { 00:05:26.746 "trtype": "TCP", 00:05:26.746 "max_queue_depth": 128, 00:05:26.746 "max_io_qpairs_per_ctrlr": 127, 00:05:26.746 "in_capsule_data_size": 4096, 00:05:26.746 "max_io_size": 131072, 00:05:26.746 "io_unit_size": 131072, 00:05:26.746 "max_aq_depth": 128, 00:05:26.746 "num_shared_buffers": 511, 00:05:26.746 "buf_cache_size": 4294967295, 00:05:26.746 "dif_insert_or_strip": false, 00:05:26.746 "zcopy": false, 00:05:26.746 "c2h_success": true, 00:05:26.746 "sock_priority": 0, 00:05:26.746 "abort_timeout_sec": 1, 00:05:26.746 "ack_timeout": 0, 00:05:26.746 "data_wr_pool_size": 0 00:05:26.746 } 00:05:26.746 } 00:05:26.746 ] 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "subsystem": "nbd", 00:05:26.746 "config": [] 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "subsystem": "ublk", 00:05:26.746 "config": [] 00:05:26.746 }, 00:05:26.746 { 00:05:26.746 "subsystem": "vhost_blk", 00:05:26.747 "config": [] 00:05:26.747 }, 00:05:26.747 { 00:05:26.747 "subsystem": "scsi", 00:05:26.747 "config": null 00:05:26.747 }, 00:05:26.747 { 00:05:26.747 "subsystem": "iscsi", 00:05:26.747 "config": [ 00:05:26.747 { 00:05:26.747 "method": "iscsi_set_options", 00:05:26.747 "params": { 00:05:26.747 "node_base": "iqn.2016-06.io.spdk", 00:05:26.747 "max_sessions": 128, 00:05:26.747 "max_connections_per_session": 2, 
00:05:26.747 "max_queue_depth": 64, 00:05:26.747 "default_time2wait": 2, 00:05:26.747 "default_time2retain": 20, 00:05:26.747 "first_burst_length": 8192, 00:05:26.747 "immediate_data": true, 00:05:26.747 "allow_duplicated_isid": false, 00:05:26.747 "error_recovery_level": 0, 00:05:26.747 "nop_timeout": 60, 00:05:26.747 "nop_in_interval": 30, 00:05:26.747 "disable_chap": false, 00:05:26.747 "require_chap": false, 00:05:26.747 "mutual_chap": false, 00:05:26.747 "chap_group": 0, 00:05:26.747 "max_large_datain_per_connection": 64, 00:05:26.747 "max_r2t_per_connection": 4, 00:05:26.747 "pdu_pool_size": 36864, 00:05:26.747 "immediate_data_pool_size": 16384, 00:05:26.747 "data_out_pool_size": 2048 00:05:26.747 } 00:05:26.747 } 00:05:26.747 ] 00:05:26.747 }, 00:05:26.747 { 00:05:26.747 "subsystem": "vhost_scsi", 00:05:26.747 "config": [] 00:05:26.747 } 00:05:26.747 ] 00:05:26.747 } 00:05:26.747 18:55:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:26.747 18:55:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2727113 00:05:26.747 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2727113 ']' 00:05:26.747 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2727113 00:05:26.747 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:26.747 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.747 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2727113 00:05:26.747 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.747 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.747 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2727113' 00:05:26.747 killing process with pid 2727113 00:05:26.747 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2727113 00:05:26.747 18:55:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2727113 00:05:27.006 18:55:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2727136 00:05:27.006 18:55:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:27.006 18:55:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:32.274 18:55:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2727136 00:05:32.274 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2727136 ']' 00:05:32.274 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2727136 00:05:32.274 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:32.274 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.274 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2727136 00:05:32.274 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.274 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.274 18:55:49 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2727136' 00:05:32.274 killing process with pid 2727136 00:05:32.274 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2727136 00:05:32.274 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2727136 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:32.534 00:05:32.534 real 0m6.260s 00:05:32.534 user 0m5.916s 00:05:32.534 sys 0m0.639s 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.534 ************************************ 00:05:32.534 END TEST skip_rpc_with_json 00:05:32.534 ************************************ 00:05:32.534 18:55:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:32.534 18:55:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.534 18:55:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.534 18:55:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.534 ************************************ 00:05:32.534 START TEST skip_rpc_with_delay 00:05:32.534 ************************************ 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.534 [2024-11-26 18:55:49.630641] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.534 00:05:32.534 real 0m0.045s 00:05:32.534 user 0m0.020s 00:05:32.534 sys 0m0.024s 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.534 18:55:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:32.534 ************************************ 00:05:32.534 END TEST skip_rpc_with_delay 00:05:32.534 ************************************ 00:05:32.534 18:55:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:32.534 18:55:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:32.534 18:55:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:32.534 18:55:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.534 18:55:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.534 18:55:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.534 ************************************ 00:05:32.534 START TEST exit_on_failed_rpc_init 00:05:32.534 ************************************ 00:05:32.534 18:55:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:32.534 18:55:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2727973 00:05:32.534 18:55:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2727973 00:05:32.534 18:55:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2727973 ']' 00:05:32.534 18:55:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.534 18:55:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.534 18:55:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.534 18:55:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.534 18:55:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.534 18:55:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:32.793 [2024-11-26 18:55:49.753720] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
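Annotation: the valid_exec_arg/es trace above is autotest_common.sh's NOT wrapper asserting that spdk_tgt fails by design when --wait-for-rpc is combined with --no-rpc-server. A minimal sketch of that assert-failure idiom; the function body is illustrative, not the verbatim helper:

NOT() {
    # run the wrapped command, capturing its exit status instead of aborting
    local es=0
    "$@" || es=$?
    # NOT succeeds only if the wrapped command failed
    (( es != 0 ))
}
# usage, mirroring the test above:
# NOT build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
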
00:05:32.793 [2024-11-26 18:55:49.753776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2727973 ] 00:05:32.793 [2024-11-26 18:55:49.825424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.793 [2024-11-26 18:55:49.873836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.052 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.052 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:33.052 18:55:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.052 18:55:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:33.052 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:33.052 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:33.052 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.052 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.052 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.052 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.052 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.052 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.052 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.052 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:33.052 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:33.052 [2024-11-26 18:55:50.115581] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:33.052 [2024-11-26 18:55:50.115633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728057 ] 00:05:33.052 [2024-11-26 18:55:50.185320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.052 [2024-11-26 18:55:50.230616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.052 [2024-11-26 18:55:50.230692] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
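Annotation: the "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is the point of this test: a second spdk_tgt cannot bind the socket the first still owns. A hedged sketch of running two targets side by side instead, giving the second its own socket with -r (the same flag json_config_extra_key uses later in this log); the paths are illustrative:

build/bin/spdk_tgt -m 0x1 &                        # first target owns /var/tmp/spdk.sock
build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock & # second target binds its own socket
# RPCs can then be directed at a specific instance:
# scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods
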
00:05:33.052 [2024-11-26 18:55:50.230706] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:33.052 [2024-11-26 18:55:50.230714] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2727973 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2727973 ']' 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2727973 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2727973 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2727973' 00:05:33.312 killing process with pid 2727973 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2727973 00:05:33.312 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2727973 00:05:33.571 00:05:33.571 real 0m0.899s 00:05:33.571 user 0m0.933s 00:05:33.571 sys 0m0.391s 00:05:33.571 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.571 18:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.571 ************************************ 00:05:33.571 END TEST exit_on_failed_rpc_init 00:05:33.571 ************************************ 00:05:33.571 18:55:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:33.571 00:05:33.571 real 0m13.069s 00:05:33.571 user 0m12.192s 00:05:33.571 sys 0m1.671s 00:05:33.571 18:55:50 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.571 18:55:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.571 ************************************ 00:05:33.571 END TEST skip_rpc 00:05:33.571 ************************************ 00:05:33.571 18:55:50 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:33.571 18:55:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.571 18:55:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.571 18:55:50 
-- common/autotest_common.sh@10 -- # set +x 00:05:33.571 ************************************ 00:05:33.571 START TEST rpc_client 00:05:33.571 ************************************ 00:05:33.571 18:55:50 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:33.831 * Looking for test storage... 00:05:33.831 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:33.831 18:55:50 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:33.831 18:55:50 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:33.831 18:55:50 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:33.831 18:55:50 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.831 18:55:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:33.831 18:55:50 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.831 18:55:50 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:33.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.831 --rc genhtml_branch_coverage=1 00:05:33.831 --rc genhtml_function_coverage=1 00:05:33.831 --rc genhtml_legend=1 00:05:33.831 --rc geninfo_all_blocks=1 00:05:33.831 --rc geninfo_unexecuted_blocks=1 00:05:33.831 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.831 ' 00:05:33.831 18:55:50 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:33.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.831 --rc genhtml_branch_coverage=1 00:05:33.831 --rc genhtml_function_coverage=1 00:05:33.831 --rc genhtml_legend=1 00:05:33.831 --rc geninfo_all_blocks=1 00:05:33.831 --rc geninfo_unexecuted_blocks=1 00:05:33.831 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.831 ' 00:05:33.831 18:55:50 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:33.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.831 --rc genhtml_branch_coverage=1 00:05:33.831 --rc genhtml_function_coverage=1 00:05:33.831 --rc genhtml_legend=1 00:05:33.831 --rc geninfo_all_blocks=1 00:05:33.831 --rc geninfo_unexecuted_blocks=1 00:05:33.831 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.831 ' 00:05:33.831 18:55:50 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:33.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.831 --rc genhtml_branch_coverage=1 00:05:33.831 --rc genhtml_function_coverage=1 00:05:33.831 --rc genhtml_legend=1 00:05:33.831 --rc geninfo_all_blocks=1 00:05:33.831 --rc geninfo_unexecuted_blocks=1 00:05:33.831 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.831 ' 00:05:33.831 18:55:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:33.831 OK 00:05:33.831 18:55:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:33.831 00:05:33.831 real 0m0.219s 00:05:33.831 user 0m0.120s 00:05:33.831 sys 0m0.117s 00:05:33.831 18:55:50 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 
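Annotation: the long cmp_versions trace above (lt 1.15 2, repeated in the sections that follow) is scripts/common.sh deciding whether the installed lcov is older than 2.x so the matching coverage flags get exported. A self-contained sketch of the same field-by-field comparison, simplified to split on dots only where the script's IFS=.-: also handles '-' and ':'; version_lt is an illustrative name, not the script's own:

version_lt() {
    local -a v1 v2
    local i n
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    # compare component by component, treating missing fields as 0
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1  # equal versions are not less-than
}
version_lt 1.15 2 && echo "use the lcov 1.x option set"
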
00:05:33.831 18:55:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:33.831 ************************************ 00:05:33.831 END TEST rpc_client 00:05:33.831 ************************************ 00:05:33.831 18:55:51 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:33.831 18:55:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.831 18:55:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.831 18:55:51 -- common/autotest_common.sh@10 -- # set +x 00:05:34.091 ************************************ 00:05:34.091 START TEST json_config 00:05:34.091 ************************************ 00:05:34.091 18:55:51 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:34.091 18:55:51 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.091 18:55:51 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.091 18:55:51 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.091 18:55:51 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.091 18:55:51 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.091 18:55:51 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.091 18:55:51 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.091 18:55:51 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.091 18:55:51 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.091 18:55:51 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.091 18:55:51 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.091 18:55:51 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.091 18:55:51 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.091 18:55:51 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.091 18:55:51 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.091 18:55:51 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:34.091 18:55:51 json_config -- scripts/common.sh@345 -- # : 1 00:05:34.091 18:55:51 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.091 18:55:51 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.091 18:55:51 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:34.091 18:55:51 json_config -- scripts/common.sh@353 -- # local d=1 00:05:34.091 18:55:51 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.091 18:55:51 json_config -- scripts/common.sh@355 -- # echo 1 00:05:34.091 18:55:51 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.091 18:55:51 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:34.091 18:55:51 json_config -- scripts/common.sh@353 -- # local d=2 00:05:34.091 18:55:51 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.091 18:55:51 json_config -- scripts/common.sh@355 -- # echo 2 00:05:34.091 18:55:51 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.091 18:55:51 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.091 18:55:51 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.091 18:55:51 json_config -- scripts/common.sh@368 -- # return 0 00:05:34.091 18:55:51 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.091 18:55:51 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.091 --rc genhtml_branch_coverage=1 00:05:34.091 --rc genhtml_function_coverage=1 00:05:34.091 --rc genhtml_legend=1 00:05:34.091 --rc geninfo_all_blocks=1 00:05:34.091 --rc geninfo_unexecuted_blocks=1 00:05:34.091 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.091 ' 00:05:34.091 18:55:51 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.091 --rc genhtml_branch_coverage=1 00:05:34.091 --rc genhtml_function_coverage=1 00:05:34.091 --rc genhtml_legend=1 00:05:34.091 --rc geninfo_all_blocks=1 00:05:34.091 --rc geninfo_unexecuted_blocks=1 00:05:34.091 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.091 ' 00:05:34.091 18:55:51 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.092 --rc genhtml_branch_coverage=1 00:05:34.092 --rc genhtml_function_coverage=1 00:05:34.092 --rc genhtml_legend=1 00:05:34.092 --rc geninfo_all_blocks=1 00:05:34.092 --rc geninfo_unexecuted_blocks=1 00:05:34.092 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.092 ' 00:05:34.092 18:55:51 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.092 --rc genhtml_branch_coverage=1 00:05:34.092 --rc genhtml_function_coverage=1 00:05:34.092 --rc genhtml_legend=1 00:05:34.092 --rc geninfo_all_blocks=1 00:05:34.092 --rc geninfo_unexecuted_blocks=1 00:05:34.092 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.092 ' 00:05:34.092 18:55:51 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:34.092 18:55:51 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.092 18:55:51 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.092 18:55:51 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.092 18:55:51 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.092 18:55:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.092 18:55:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.092 18:55:51 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.092 18:55:51 json_config -- paths/export.sh@5 -- # export PATH 00:05:34.092 18:55:51 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@51 -- # : 0 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:34.092 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.092 18:55:51 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:34.092 18:55:51 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:34.092 18:55:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:34.092 18:55:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:34.092 18:55:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:34.092 18:55:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:34.092 18:55:51 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:34.092 WARNING: No tests are enabled so not running JSON configuration tests 00:05:34.092 18:55:51 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:34.092 00:05:34.092 real 0m0.199s 00:05:34.092 user 0m0.108s 00:05:34.092 sys 0m0.100s 00:05:34.092 18:55:51 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.092 18:55:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.092 ************************************ 00:05:34.092 END TEST json_config 00:05:34.092 ************************************ 00:05:34.092 18:55:51 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:34.092 18:55:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.092 18:55:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.092 18:55:51 -- common/autotest_common.sh@10 -- # set +x 00:05:34.352 ************************************ 00:05:34.352 START TEST json_config_extra_key 00:05:34.352 ************************************ 00:05:34.352 18:55:51 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:34.352 18:55:51 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.352 18:55:51 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov 
--version 00:05:34.352 18:55:51 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.352 18:55:51 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.352 18:55:51 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.352 18:55:51 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.352 18:55:51 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.352 18:55:51 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.352 18:55:51 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.352 18:55:51 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.352 18:55:51 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.352 18:55:51 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:34.353 18:55:51 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.353 18:55:51 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.353 --rc genhtml_branch_coverage=1 00:05:34.353 --rc genhtml_function_coverage=1 00:05:34.353 --rc genhtml_legend=1 00:05:34.353 --rc geninfo_all_blocks=1 00:05:34.353 --rc geninfo_unexecuted_blocks=1 00:05:34.353 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.353 ' 00:05:34.353 18:55:51 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.353 --rc genhtml_branch_coverage=1 
00:05:34.353 --rc genhtml_function_coverage=1 00:05:34.353 --rc genhtml_legend=1 00:05:34.353 --rc geninfo_all_blocks=1 00:05:34.353 --rc geninfo_unexecuted_blocks=1 00:05:34.353 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.353 ' 00:05:34.353 18:55:51 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.353 --rc genhtml_branch_coverage=1 00:05:34.353 --rc genhtml_function_coverage=1 00:05:34.353 --rc genhtml_legend=1 00:05:34.353 --rc geninfo_all_blocks=1 00:05:34.353 --rc geninfo_unexecuted_blocks=1 00:05:34.353 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.353 ' 00:05:34.353 18:55:51 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.353 --rc genhtml_branch_coverage=1 00:05:34.353 --rc genhtml_function_coverage=1 00:05:34.353 --rc genhtml_legend=1 00:05:34.353 --rc geninfo_all_blocks=1 00:05:34.353 --rc geninfo_unexecuted_blocks=1 00:05:34.353 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.353 ' 00:05:34.353 18:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.353 18:55:51 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.353 18:55:51 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.353 18:55:51 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.353 18:55:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.353 18:55:51 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.353 18:55:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:34.353 18:55:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:34.353 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.353 18:55:51 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:34.353 18:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:34.353 18:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:34.353 18:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:05:34.353 18:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:34.353 18:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:34.353 18:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:34.353 18:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:34.353 18:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:34.353 18:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:34.353 18:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:34.353 18:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:34.353 INFO: launching applications... 00:05:34.353 18:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:34.353 18:55:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:34.353 18:55:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:34.353 18:55:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:34.353 18:55:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:34.353 18:55:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:34.353 18:55:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.353 18:55:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.353 18:55:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2728403 00:05:34.353 18:55:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:34.353 Waiting for target to run... 00:05:34.353 18:55:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2728403 /var/tmp/spdk_tgt.sock 00:05:34.353 18:55:51 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2728403 ']' 00:05:34.354 18:55:51 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:34.354 18:55:51 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.354 18:55:51 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.354 18:55:51 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
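Annotation: waitforlisten above blocks until pid 2728403's RPC socket /var/tmp/spdk_tgt.sock is up. A minimal sketch of that poll loop under stated assumptions (a fixed retry budget and a plain socket-file test; the real helper additionally confirms the socket actually answers RPCs):

wait_for_socket() {
    local pid=$1 sock=$2 i
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1  # target died before listening
        [[ -S $sock ]] && return 0              # Unix socket exists: target is up
        sleep 0.1
    done
    return 1                                    # timed out
}
# wait_for_socket "$spdk_pid" /var/tmp/spdk_tgt.sock
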
00:05:34.354 18:55:51 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.354 18:55:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:34.354 [2024-11-26 18:55:51.546351] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:34.354 [2024-11-26 18:55:51.546419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728403 ] 00:05:34.921 [2024-11-26 18:55:51.995647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.921 [2024-11-26 18:55:52.043582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.488 18:55:52 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.488 18:55:52 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:35.488 18:55:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:35.488 00:05:35.488 18:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:35.488 INFO: shutting down applications... 00:05:35.488 18:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:35.488 18:55:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:35.488 18:55:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.488 18:55:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2728403 ]] 00:05:35.488 18:55:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2728403 00:05:35.488 18:55:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.488 18:55:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.488 18:55:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2728403 00:05:35.488 18:55:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.748 18:55:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.748 18:55:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.748 18:55:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2728403 00:05:35.748 18:55:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:35.748 18:55:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:35.748 18:55:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:35.748 18:55:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:35.748 SPDK target shutdown done 00:05:35.748 18:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:35.748 Success 00:05:35.748 00:05:35.748 real 0m1.594s 00:05:35.748 user 0m1.185s 00:05:35.748 sys 0m0.599s 00:05:35.748 18:55:52 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.748 18:55:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.748 ************************************ 00:05:35.748 END TEST json_config_extra_key 00:05:35.748 ************************************ 00:05:35.748 18:55:52 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
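Annotation: the shutdown sequence traced above (kill -SIGINT 2728403, then up to 30 probes with kill -0 and sleep 0.5 before "SPDK target shutdown done") is json_config/common.sh's graceful-stop pattern. A hedged sketch with illustrative names; the final SIGKILL escalation is an assumption, not shown in this log:

shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'  # process exited cleanly
            return 0
        fi
        sleep 0.5
    done
    kill -9 "$pid" 2>/dev/null  # assumed last resort if SIGINT was ignored
    return 1
}
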
00:05:35.748 18:55:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.748 18:55:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.006 18:55:52 -- common/autotest_common.sh@10 -- # set +x 00:05:36.006 ************************************ 00:05:36.006 START TEST alias_rpc 00:05:36.006 ************************************ 00:05:36.006 18:55:52 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:36.006 * Looking for test storage... 00:05:36.006 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:36.006 18:55:53 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:36.006 18:55:53 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:36.006 18:55:53 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:36.006 18:55:53 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:36.006 18:55:53 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.006 18:55:53 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.006 18:55:53 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.006 18:55:53 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.006 18:55:53 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.006 18:55:53 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.006 18:55:53 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.006 18:55:53 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.006 18:55:53 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.006 18:55:53 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.006 18:55:53 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.006 18:55:53 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:36.006 18:55:53 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:36.006 18:55:53 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.006 18:55:53 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.007 18:55:53 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:36.007 18:55:53 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:36.007 18:55:53 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.007 18:55:53 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:36.007 18:55:53 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.007 18:55:53 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:36.007 18:55:53 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:36.007 18:55:53 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.007 18:55:53 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:36.007 18:55:53 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.007 18:55:53 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.007 18:55:53 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.007 18:55:53 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:36.007 18:55:53 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.007 18:55:53 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:36.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.007 --rc genhtml_branch_coverage=1 00:05:36.007 --rc genhtml_function_coverage=1 00:05:36.007 --rc genhtml_legend=1 00:05:36.007 --rc geninfo_all_blocks=1 00:05:36.007 --rc geninfo_unexecuted_blocks=1 00:05:36.007 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.007 ' 00:05:36.007 18:55:53 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:36.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.007 --rc genhtml_branch_coverage=1 00:05:36.007 --rc genhtml_function_coverage=1 00:05:36.007 --rc genhtml_legend=1 00:05:36.007 --rc geninfo_all_blocks=1 00:05:36.007 --rc geninfo_unexecuted_blocks=1 00:05:36.007 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.007 ' 00:05:36.007 18:55:53 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:36.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.007 --rc genhtml_branch_coverage=1 00:05:36.007 --rc genhtml_function_coverage=1 00:05:36.007 --rc genhtml_legend=1 00:05:36.007 --rc geninfo_all_blocks=1 00:05:36.007 --rc geninfo_unexecuted_blocks=1 00:05:36.007 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.007 ' 00:05:36.007 18:55:53 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:36.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.007 --rc genhtml_branch_coverage=1 00:05:36.007 --rc genhtml_function_coverage=1 00:05:36.007 --rc genhtml_legend=1 00:05:36.007 --rc geninfo_all_blocks=1 00:05:36.007 --rc geninfo_unexecuted_blocks=1 00:05:36.007 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.007 ' 00:05:36.007 18:55:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:36.007 18:55:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2728641 00:05:36.007 18:55:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.007 18:55:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2728641 00:05:36.007 18:55:53 alias_rpc -- 
common/autotest_common.sh@835 -- # '[' -z 2728641 ']' 00:05:36.007 18:55:53 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.007 18:55:53 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.007 18:55:53 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.007 18:55:53 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.007 18:55:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.007 [2024-11-26 18:55:53.215004] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:36.007 [2024-11-26 18:55:53.215069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728641 ] 00:05:36.266 [2024-11-26 18:55:53.285078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.266 [2024-11-26 18:55:53.332636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.524 18:55:53 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.524 18:55:53 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.524 18:55:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:36.783 18:55:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2728641 00:05:36.783 18:55:53 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2728641 ']' 00:05:36.783 18:55:53 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2728641 00:05:36.783 18:55:53 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:36.783 18:55:53 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.783 18:55:53 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728641 00:05:36.783 18:55:53 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.783 18:55:53 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.783 18:55:53 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2728641' 00:05:36.783 killing process with pid 2728641 00:05:36.783 18:55:53 alias_rpc -- common/autotest_common.sh@973 -- # kill 2728641 00:05:36.783 18:55:53 alias_rpc -- common/autotest_common.sh@978 -- # wait 2728641 00:05:37.042 00:05:37.042 real 0m1.124s 00:05:37.042 user 0m1.121s 00:05:37.042 sys 0m0.439s 00:05:37.042 18:55:54 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.042 18:55:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.042 ************************************ 00:05:37.042 END TEST alias_rpc 00:05:37.042 ************************************ 00:05:37.042 18:55:54 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:37.042 18:55:54 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.042 18:55:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.042 18:55:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.042 18:55:54 -- common/autotest_common.sh@10 -- # set +x 00:05:37.042 ************************************ 00:05:37.042 START TEST 
spdkcli_tcp 00:05:37.042 ************************************ 00:05:37.042 18:55:54 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.301 * Looking for test storage... 00:05:37.301 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.301 18:55:54 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.301 --rc genhtml_branch_coverage=1 00:05:37.301 --rc genhtml_function_coverage=1 00:05:37.301 --rc genhtml_legend=1 00:05:37.301 --rc geninfo_all_blocks=1 00:05:37.301 --rc geninfo_unexecuted_blocks=1 00:05:37.301 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.301 ' 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.301 --rc genhtml_branch_coverage=1 00:05:37.301 --rc genhtml_function_coverage=1 00:05:37.301 --rc genhtml_legend=1 00:05:37.301 --rc geninfo_all_blocks=1 00:05:37.301 --rc geninfo_unexecuted_blocks=1 00:05:37.301 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.301 ' 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.301 --rc genhtml_branch_coverage=1 00:05:37.301 --rc genhtml_function_coverage=1 00:05:37.301 --rc genhtml_legend=1 00:05:37.301 --rc geninfo_all_blocks=1 00:05:37.301 --rc geninfo_unexecuted_blocks=1 00:05:37.301 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.301 ' 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.301 --rc genhtml_branch_coverage=1 00:05:37.301 --rc genhtml_function_coverage=1 00:05:37.301 --rc genhtml_legend=1 00:05:37.301 --rc geninfo_all_blocks=1 00:05:37.301 --rc geninfo_unexecuted_blocks=1 00:05:37.301 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.301 ' 00:05:37.301 18:55:54 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:37.301 18:55:54 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:37.301 18:55:54 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:37.301 18:55:54 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:37.301 18:55:54 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:37.301 18:55:54 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:37.301 18:55:54 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.301 18:55:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:37.301 18:55:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2728875 00:05:37.301 18:55:54 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2728875 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2728875 ']' 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.301 18:55:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.301 [2024-11-26 18:55:54.406269] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:37.301 [2024-11-26 18:55:54.406321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728875 ] 00:05:37.301 [2024-11-26 18:55:54.475395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.559 [2024-11-26 18:55:54.526412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.559 [2024-11-26 18:55:54.526415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.559 18:55:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.559 18:55:54 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:37.559 18:55:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2728886 00:05:37.559 18:55:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:37.559 18:55:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:37.816 [ 00:05:37.816 "spdk_get_version", 00:05:37.816 "rpc_get_methods", 00:05:37.816 "notify_get_notifications", 00:05:37.816 "notify_get_types", 00:05:37.816 "trace_get_info", 00:05:37.816 "trace_get_tpoint_group_mask", 00:05:37.816 "trace_disable_tpoint_group", 00:05:37.816 "trace_enable_tpoint_group", 00:05:37.816 "trace_clear_tpoint_mask", 00:05:37.816 "trace_set_tpoint_mask", 00:05:37.816 "fsdev_set_opts", 00:05:37.816 "fsdev_get_opts", 00:05:37.816 "framework_get_pci_devices", 00:05:37.816 "framework_get_config", 00:05:37.816 "framework_get_subsystems", 00:05:37.816 "vfu_tgt_set_base_path", 00:05:37.816 
"keyring_get_keys", 00:05:37.816 "iobuf_get_stats", 00:05:37.816 "iobuf_set_options", 00:05:37.816 "sock_get_default_impl", 00:05:37.816 "sock_set_default_impl", 00:05:37.816 "sock_impl_set_options", 00:05:37.816 "sock_impl_get_options", 00:05:37.816 "vmd_rescan", 00:05:37.816 "vmd_remove_device", 00:05:37.816 "vmd_enable", 00:05:37.816 "accel_get_stats", 00:05:37.816 "accel_set_options", 00:05:37.816 "accel_set_driver", 00:05:37.816 "accel_crypto_key_destroy", 00:05:37.816 "accel_crypto_keys_get", 00:05:37.816 "accel_crypto_key_create", 00:05:37.816 "accel_assign_opc", 00:05:37.816 "accel_get_module_info", 00:05:37.816 "accel_get_opc_assignments", 00:05:37.816 "bdev_get_histogram", 00:05:37.816 "bdev_enable_histogram", 00:05:37.816 "bdev_set_qos_limit", 00:05:37.816 "bdev_set_qd_sampling_period", 00:05:37.816 "bdev_get_bdevs", 00:05:37.816 "bdev_reset_iostat", 00:05:37.816 "bdev_get_iostat", 00:05:37.816 "bdev_examine", 00:05:37.816 "bdev_wait_for_examine", 00:05:37.816 "bdev_set_options", 00:05:37.816 "scsi_get_devices", 00:05:37.816 "thread_set_cpumask", 00:05:37.816 "scheduler_set_options", 00:05:37.816 "framework_get_governor", 00:05:37.816 "framework_get_scheduler", 00:05:37.816 "framework_set_scheduler", 00:05:37.816 "framework_get_reactors", 00:05:37.816 "thread_get_io_channels", 00:05:37.816 "thread_get_pollers", 00:05:37.816 "thread_get_stats", 00:05:37.816 "framework_monitor_context_switch", 00:05:37.816 "spdk_kill_instance", 00:05:37.816 "log_enable_timestamps", 00:05:37.816 "log_get_flags", 00:05:37.816 "log_clear_flag", 00:05:37.816 "log_set_flag", 00:05:37.816 "log_get_level", 00:05:37.816 "log_set_level", 00:05:37.816 "log_get_print_level", 00:05:37.816 "log_set_print_level", 00:05:37.816 "framework_enable_cpumask_locks", 00:05:37.816 "framework_disable_cpumask_locks", 00:05:37.816 "framework_wait_init", 00:05:37.816 "framework_start_init", 00:05:37.816 "virtio_blk_create_transport", 00:05:37.816 "virtio_blk_get_transports", 00:05:37.816 "vhost_controller_set_coalescing", 00:05:37.816 "vhost_get_controllers", 00:05:37.816 "vhost_delete_controller", 00:05:37.816 "vhost_create_blk_controller", 00:05:37.816 "vhost_scsi_controller_remove_target", 00:05:37.816 "vhost_scsi_controller_add_target", 00:05:37.816 "vhost_start_scsi_controller", 00:05:37.816 "vhost_create_scsi_controller", 00:05:37.816 "ublk_recover_disk", 00:05:37.816 "ublk_get_disks", 00:05:37.816 "ublk_stop_disk", 00:05:37.816 "ublk_start_disk", 00:05:37.816 "ublk_destroy_target", 00:05:37.816 "ublk_create_target", 00:05:37.816 "nbd_get_disks", 00:05:37.816 "nbd_stop_disk", 00:05:37.816 "nbd_start_disk", 00:05:37.816 "env_dpdk_get_mem_stats", 00:05:37.816 "nvmf_stop_mdns_prr", 00:05:37.816 "nvmf_publish_mdns_prr", 00:05:37.816 "nvmf_subsystem_get_listeners", 00:05:37.816 "nvmf_subsystem_get_qpairs", 00:05:37.816 "nvmf_subsystem_get_controllers", 00:05:37.816 "nvmf_get_stats", 00:05:37.816 "nvmf_get_transports", 00:05:37.816 "nvmf_create_transport", 00:05:37.816 "nvmf_get_targets", 00:05:37.816 "nvmf_delete_target", 00:05:37.816 "nvmf_create_target", 00:05:37.816 "nvmf_subsystem_allow_any_host", 00:05:37.816 "nvmf_subsystem_set_keys", 00:05:37.816 "nvmf_subsystem_remove_host", 00:05:37.816 "nvmf_subsystem_add_host", 00:05:37.816 "nvmf_ns_remove_host", 00:05:37.816 "nvmf_ns_add_host", 00:05:37.816 "nvmf_subsystem_remove_ns", 00:05:37.816 "nvmf_subsystem_set_ns_ana_group", 00:05:37.816 "nvmf_subsystem_add_ns", 00:05:37.816 "nvmf_subsystem_listener_set_ana_state", 00:05:37.816 "nvmf_discovery_get_referrals", 
00:05:37.816 "nvmf_discovery_remove_referral", 00:05:37.816 "nvmf_discovery_add_referral", 00:05:37.816 "nvmf_subsystem_remove_listener", 00:05:37.816 "nvmf_subsystem_add_listener", 00:05:37.816 "nvmf_delete_subsystem", 00:05:37.816 "nvmf_create_subsystem", 00:05:37.816 "nvmf_get_subsystems", 00:05:37.816 "nvmf_set_crdt", 00:05:37.816 "nvmf_set_config", 00:05:37.816 "nvmf_set_max_subsystems", 00:05:37.816 "iscsi_get_histogram", 00:05:37.816 "iscsi_enable_histogram", 00:05:37.816 "iscsi_set_options", 00:05:37.816 "iscsi_get_auth_groups", 00:05:37.816 "iscsi_auth_group_remove_secret", 00:05:37.816 "iscsi_auth_group_add_secret", 00:05:37.816 "iscsi_delete_auth_group", 00:05:37.816 "iscsi_create_auth_group", 00:05:37.816 "iscsi_set_discovery_auth", 00:05:37.816 "iscsi_get_options", 00:05:37.816 "iscsi_target_node_request_logout", 00:05:37.816 "iscsi_target_node_set_redirect", 00:05:37.816 "iscsi_target_node_set_auth", 00:05:37.816 "iscsi_target_node_add_lun", 00:05:37.816 "iscsi_get_stats", 00:05:37.816 "iscsi_get_connections", 00:05:37.816 "iscsi_portal_group_set_auth", 00:05:37.816 "iscsi_start_portal_group", 00:05:37.816 "iscsi_delete_portal_group", 00:05:37.816 "iscsi_create_portal_group", 00:05:37.816 "iscsi_get_portal_groups", 00:05:37.816 "iscsi_delete_target_node", 00:05:37.816 "iscsi_target_node_remove_pg_ig_maps", 00:05:37.816 "iscsi_target_node_add_pg_ig_maps", 00:05:37.816 "iscsi_create_target_node", 00:05:37.816 "iscsi_get_target_nodes", 00:05:37.816 "iscsi_delete_initiator_group", 00:05:37.816 "iscsi_initiator_group_remove_initiators", 00:05:37.816 "iscsi_initiator_group_add_initiators", 00:05:37.816 "iscsi_create_initiator_group", 00:05:37.816 "iscsi_get_initiator_groups", 00:05:37.816 "fsdev_aio_delete", 00:05:37.816 "fsdev_aio_create", 00:05:37.816 "keyring_linux_set_options", 00:05:37.816 "keyring_file_remove_key", 00:05:37.816 "keyring_file_add_key", 00:05:37.816 "vfu_virtio_create_fs_endpoint", 00:05:37.816 "vfu_virtio_create_scsi_endpoint", 00:05:37.816 "vfu_virtio_scsi_remove_target", 00:05:37.816 "vfu_virtio_scsi_add_target", 00:05:37.816 "vfu_virtio_create_blk_endpoint", 00:05:37.816 "vfu_virtio_delete_endpoint", 00:05:37.816 "iaa_scan_accel_module", 00:05:37.816 "dsa_scan_accel_module", 00:05:37.816 "ioat_scan_accel_module", 00:05:37.816 "accel_error_inject_error", 00:05:37.816 "bdev_iscsi_delete", 00:05:37.816 "bdev_iscsi_create", 00:05:37.816 "bdev_iscsi_set_options", 00:05:37.816 "bdev_virtio_attach_controller", 00:05:37.816 "bdev_virtio_scsi_get_devices", 00:05:37.816 "bdev_virtio_detach_controller", 00:05:37.816 "bdev_virtio_blk_set_hotplug", 00:05:37.816 "bdev_ftl_set_property", 00:05:37.816 "bdev_ftl_get_properties", 00:05:37.816 "bdev_ftl_get_stats", 00:05:37.816 "bdev_ftl_unmap", 00:05:37.816 "bdev_ftl_unload", 00:05:37.816 "bdev_ftl_delete", 00:05:37.816 "bdev_ftl_load", 00:05:37.816 "bdev_ftl_create", 00:05:37.816 "bdev_aio_delete", 00:05:37.816 "bdev_aio_rescan", 00:05:37.816 "bdev_aio_create", 00:05:37.816 "blobfs_create", 00:05:37.816 "blobfs_detect", 00:05:37.816 "blobfs_set_cache_size", 00:05:37.816 "bdev_zone_block_delete", 00:05:37.816 "bdev_zone_block_create", 00:05:37.816 "bdev_delay_delete", 00:05:37.817 "bdev_delay_create", 00:05:37.817 "bdev_delay_update_latency", 00:05:37.817 "bdev_split_delete", 00:05:37.817 "bdev_split_create", 00:05:37.817 "bdev_error_inject_error", 00:05:37.817 "bdev_error_delete", 00:05:37.817 "bdev_error_create", 00:05:37.817 "bdev_raid_set_options", 00:05:37.817 "bdev_raid_remove_base_bdev", 00:05:37.817 
"bdev_raid_add_base_bdev", 00:05:37.817 "bdev_raid_delete", 00:05:37.817 "bdev_raid_create", 00:05:37.817 "bdev_raid_get_bdevs", 00:05:37.817 "bdev_lvol_set_parent_bdev", 00:05:37.817 "bdev_lvol_set_parent", 00:05:37.817 "bdev_lvol_check_shallow_copy", 00:05:37.817 "bdev_lvol_start_shallow_copy", 00:05:37.817 "bdev_lvol_grow_lvstore", 00:05:37.817 "bdev_lvol_get_lvols", 00:05:37.817 "bdev_lvol_get_lvstores", 00:05:37.817 "bdev_lvol_delete", 00:05:37.817 "bdev_lvol_set_read_only", 00:05:37.817 "bdev_lvol_resize", 00:05:37.817 "bdev_lvol_decouple_parent", 00:05:37.817 "bdev_lvol_inflate", 00:05:37.817 "bdev_lvol_rename", 00:05:37.817 "bdev_lvol_clone_bdev", 00:05:37.817 "bdev_lvol_clone", 00:05:37.817 "bdev_lvol_snapshot", 00:05:37.817 "bdev_lvol_create", 00:05:37.817 "bdev_lvol_delete_lvstore", 00:05:37.817 "bdev_lvol_rename_lvstore", 00:05:37.817 "bdev_lvol_create_lvstore", 00:05:37.817 "bdev_passthru_delete", 00:05:37.817 "bdev_passthru_create", 00:05:37.817 "bdev_nvme_cuse_unregister", 00:05:37.817 "bdev_nvme_cuse_register", 00:05:37.817 "bdev_opal_new_user", 00:05:37.817 "bdev_opal_set_lock_state", 00:05:37.817 "bdev_opal_delete", 00:05:37.817 "bdev_opal_get_info", 00:05:37.817 "bdev_opal_create", 00:05:37.817 "bdev_nvme_opal_revert", 00:05:37.817 "bdev_nvme_opal_init", 00:05:37.817 "bdev_nvme_send_cmd", 00:05:37.817 "bdev_nvme_set_keys", 00:05:37.817 "bdev_nvme_get_path_iostat", 00:05:37.817 "bdev_nvme_get_mdns_discovery_info", 00:05:37.817 "bdev_nvme_stop_mdns_discovery", 00:05:37.817 "bdev_nvme_start_mdns_discovery", 00:05:37.817 "bdev_nvme_set_multipath_policy", 00:05:37.817 "bdev_nvme_set_preferred_path", 00:05:37.817 "bdev_nvme_get_io_paths", 00:05:37.817 "bdev_nvme_remove_error_injection", 00:05:37.817 "bdev_nvme_add_error_injection", 00:05:37.817 "bdev_nvme_get_discovery_info", 00:05:37.817 "bdev_nvme_stop_discovery", 00:05:37.817 "bdev_nvme_start_discovery", 00:05:37.817 "bdev_nvme_get_controller_health_info", 00:05:37.817 "bdev_nvme_disable_controller", 00:05:37.817 "bdev_nvme_enable_controller", 00:05:37.817 "bdev_nvme_reset_controller", 00:05:37.817 "bdev_nvme_get_transport_statistics", 00:05:37.817 "bdev_nvme_apply_firmware", 00:05:37.817 "bdev_nvme_detach_controller", 00:05:37.817 "bdev_nvme_get_controllers", 00:05:37.817 "bdev_nvme_attach_controller", 00:05:37.817 "bdev_nvme_set_hotplug", 00:05:37.817 "bdev_nvme_set_options", 00:05:37.817 "bdev_null_resize", 00:05:37.817 "bdev_null_delete", 00:05:37.817 "bdev_null_create", 00:05:37.817 "bdev_malloc_delete", 00:05:37.817 "bdev_malloc_create" 00:05:37.817 ] 00:05:37.817 18:55:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:37.817 18:55:54 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:37.817 18:55:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.817 18:55:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:37.817 18:55:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2728875 00:05:37.817 18:55:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2728875 ']' 00:05:37.817 18:55:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2728875 00:05:37.817 18:55:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:37.817 18:55:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.817 18:55:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728875 00:05:37.817 18:55:55 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.074 
18:55:55 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.074 18:55:55 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2728875' 00:05:38.074 killing process with pid 2728875 00:05:38.074 18:55:55 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2728875 00:05:38.074 18:55:55 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2728875 00:05:38.332 00:05:38.332 real 0m1.134s 00:05:38.332 user 0m1.915s 00:05:38.332 sys 0m0.469s 00:05:38.332 18:55:55 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.332 18:55:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.332 ************************************ 00:05:38.332 END TEST spdkcli_tcp 00:05:38.332 ************************************ 00:05:38.332 18:55:55 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.332 18:55:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.332 18:55:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.332 18:55:55 -- common/autotest_common.sh@10 -- # set +x 00:05:38.332 ************************************ 00:05:38.332 START TEST dpdk_mem_utility 00:05:38.332 ************************************ 00:05:38.332 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.332 * Looking for test storage... 00:05:38.332 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:38.332 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.332 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.332 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.590 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.590 18:55:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:38.590 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.590 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.590 --rc genhtml_branch_coverage=1 00:05:38.590 --rc genhtml_function_coverage=1 00:05:38.590 --rc genhtml_legend=1 00:05:38.590 --rc geninfo_all_blocks=1 00:05:38.590 --rc geninfo_unexecuted_blocks=1 00:05:38.590 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.590 ' 00:05:38.590 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.590 --rc genhtml_branch_coverage=1 00:05:38.590 --rc genhtml_function_coverage=1 00:05:38.590 --rc genhtml_legend=1 00:05:38.590 --rc geninfo_all_blocks=1 00:05:38.590 --rc geninfo_unexecuted_blocks=1 00:05:38.590 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.590 ' 00:05:38.590 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.590 --rc genhtml_branch_coverage=1 00:05:38.591 --rc genhtml_function_coverage=1 00:05:38.591 --rc genhtml_legend=1 00:05:38.591 --rc geninfo_all_blocks=1 00:05:38.591 --rc geninfo_unexecuted_blocks=1 00:05:38.591 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.591 ' 00:05:38.591 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.591 --rc genhtml_branch_coverage=1 00:05:38.591 --rc genhtml_function_coverage=1 00:05:38.591 --rc genhtml_legend=1 00:05:38.591 --rc geninfo_all_blocks=1 00:05:38.591 --rc geninfo_unexecuted_blocks=1 00:05:38.591 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.591 ' 00:05:38.591 18:55:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:38.591 18:55:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2729132 00:05:38.591 18:55:55 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2729132 00:05:38.591 18:55:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.591 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2729132 ']' 00:05:38.591 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.591 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.591 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.591 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.591 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.591 [2024-11-26 18:55:55.627195] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:38.591 [2024-11-26 18:55:55.627278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729132 ] 00:05:38.591 [2024-11-26 18:55:55.698224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.591 [2024-11-26 18:55:55.742459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.850 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.850 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:38.850 18:55:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:38.850 18:55:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:38.850 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.850 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.850 { 00:05:38.850 "filename": "/tmp/spdk_mem_dump.txt" 00:05:38.850 } 00:05:38.850 18:55:55 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.850 18:55:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:38.850 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:38.850 1 heaps totaling size 818.000000 MiB 00:05:38.850 size: 818.000000 MiB heap id: 0 00:05:38.850 end heaps---------- 00:05:38.850 9 mempools totaling size 603.782043 MiB 00:05:38.850 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:38.850 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:38.850 size: 100.555481 MiB name: bdev_io_2729132 00:05:38.850 size: 50.003479 MiB name: msgpool_2729132 00:05:38.850 size: 36.509338 MiB name: fsdev_io_2729132 00:05:38.850 size: 21.763794 MiB name: PDU_Pool 00:05:38.850 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:38.850 size: 4.133484 MiB name: evtpool_2729132 00:05:38.850 size: 0.026123 MiB name: Session_Pool 00:05:38.850 end mempools------- 00:05:38.850 6 memzones totaling size 4.142822 MiB 00:05:38.850 size: 1.000366 MiB name: RG_ring_0_2729132 00:05:38.850 size: 1.000366 MiB name: RG_ring_1_2729132 00:05:38.850 size: 1.000366 MiB name: RG_ring_4_2729132 
00:05:38.850 size: 1.000366 MiB name: RG_ring_5_2729132 00:05:38.850 size: 0.125366 MiB name: RG_ring_2_2729132 00:05:38.850 size: 0.015991 MiB name: RG_ring_3_2729132 00:05:38.850 end memzones------- 00:05:38.850 18:55:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:38.850 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:38.850 list of free elements. size: 10.852478 MiB 00:05:38.850 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:38.850 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:38.850 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:38.850 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:38.850 element at address: 0x200008000000 with size: 0.959839 MiB 00:05:38.850 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:38.850 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:38.850 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:38.850 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:38.850 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:38.850 element at address: 0x200003e00000 with size: 0.490723 MiB 00:05:38.850 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:38.850 element at address: 0x200010600000 with size: 0.481934 MiB 00:05:38.850 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:38.850 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:38.850 list of standard malloc elements. size: 199.218628 MiB 00:05:38.850 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:05:38.850 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:05:38.850 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:38.850 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:38.850 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:38.850 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:38.850 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:38.850 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:38.850 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:38.850 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:38.850 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:38.850 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:38.850 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:38.850 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:38.850 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:38.850 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:38.850 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:38.850 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:38.850 element at address: 0x20000085b100 with size: 0.000183 MiB 00:05:38.850 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:05:38.850 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:05:38.850 element at address: 0x2000008df880 with size: 0.000183 MiB 00:05:38.850 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:38.850 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:38.850 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:38.850 element at address: 0x200000cff0c0 with size: 
0.000183 MiB 00:05:38.850 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:05:38.850 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:05:38.850 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:05:38.850 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:05:38.850 element at address: 0x20001067b600 with size: 0.000183 MiB 00:05:38.850 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:05:38.850 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:05:38.850 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:38.850 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:38.850 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:38.850 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:38.850 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:38.850 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:38.850 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:38.850 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:38.850 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:38.850 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:38.850 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:38.850 list of memzone associated elements. size: 607.928894 MiB 00:05:38.850 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:38.850 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:38.850 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:38.850 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:38.850 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:38.850 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2729132_0 00:05:38.850 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:38.850 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2729132_0 00:05:38.850 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:05:38.850 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2729132_0 00:05:38.850 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:38.850 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:38.850 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:38.850 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:38.850 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:38.850 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2729132_0 00:05:38.850 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:38.850 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2729132 00:05:38.850 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:38.850 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2729132 00:05:38.850 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:05:38.850 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:38.850 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:38.850 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:38.850 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:05:38.850 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:38.851 element at address: 0x200003efde40 with size: 1.008118 MiB 00:05:38.851 associated memzone info: size: 1.007996 
MiB name: MP_SCSI_TASK_Pool 00:05:38.851 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:38.851 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2729132 00:05:38.851 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:38.851 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2729132 00:05:38.851 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:38.851 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2729132 00:05:38.851 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:38.851 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2729132 00:05:38.851 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:05:38.851 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2729132 00:05:38.851 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:38.851 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2729132 00:05:38.851 element at address: 0x20001067b780 with size: 0.500488 MiB 00:05:38.851 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:38.851 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:05:38.851 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:38.851 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:38.851 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:38.851 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:38.851 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2729132 00:05:38.851 element at address: 0x2000008df940 with size: 0.125488 MiB 00:05:38.851 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2729132 00:05:38.851 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:05:38.851 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:38.851 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:38.851 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:38.851 element at address: 0x2000008db680 with size: 0.016113 MiB 00:05:38.851 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2729132 00:05:38.851 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:38.851 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:38.851 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:38.851 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2729132 00:05:38.851 element at address: 0x2000008db480 with size: 0.000305 MiB 00:05:38.851 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2729132 00:05:38.851 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:38.851 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2729132 00:05:38.851 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:38.851 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:38.851 18:55:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:38.851 18:55:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2729132 00:05:38.851 18:55:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2729132 ']' 00:05:38.851 18:55:56 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2729132 00:05:38.851 18:55:56 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:38.851 18:55:56 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:38.851 18:55:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2729132 00:05:39.109 18:55:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.109 18:55:56 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.109 18:55:56 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2729132' 00:05:39.109 killing process with pid 2729132 00:05:39.109 18:55:56 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2729132 00:05:39.109 18:55:56 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2729132 00:05:39.366 00:05:39.366 real 0m0.995s 00:05:39.366 user 0m0.889s 00:05:39.366 sys 0m0.433s 00:05:39.366 18:55:56 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.366 18:55:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.366 ************************************ 00:05:39.366 END TEST dpdk_mem_utility 00:05:39.366 ************************************ 00:05:39.366 18:55:56 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:39.366 18:55:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.366 18:55:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.366 18:55:56 -- common/autotest_common.sh@10 -- # set +x 00:05:39.366 ************************************ 00:05:39.366 START TEST event 00:05:39.366 ************************************ 00:05:39.366 18:55:56 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:39.366 * Looking for test storage... 00:05:39.625 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:39.625 18:55:56 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.625 18:55:56 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.625 18:55:56 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.625 18:55:56 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.625 18:55:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.625 18:55:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.625 18:55:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.625 18:55:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.625 18:55:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.625 18:55:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.625 18:55:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.625 18:55:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.625 18:55:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.625 18:55:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.625 18:55:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.625 18:55:56 event -- scripts/common.sh@344 -- # case "$op" in 00:05:39.625 18:55:56 event -- scripts/common.sh@345 -- # : 1 00:05:39.625 18:55:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.625 18:55:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.625 18:55:56 event -- scripts/common.sh@365 -- # decimal 1 00:05:39.625 18:55:56 event -- scripts/common.sh@353 -- # local d=1 00:05:39.625 18:55:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.625 18:55:56 event -- scripts/common.sh@355 -- # echo 1 00:05:39.625 18:55:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.625 18:55:56 event -- scripts/common.sh@366 -- # decimal 2 00:05:39.625 18:55:56 event -- scripts/common.sh@353 -- # local d=2 00:05:39.625 18:55:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.625 18:55:56 event -- scripts/common.sh@355 -- # echo 2 00:05:39.625 18:55:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.625 18:55:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.625 18:55:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.625 18:55:56 event -- scripts/common.sh@368 -- # return 0 00:05:39.625 18:55:56 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.625 18:55:56 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.625 --rc genhtml_branch_coverage=1 00:05:39.625 --rc genhtml_function_coverage=1 00:05:39.625 --rc genhtml_legend=1 00:05:39.625 --rc geninfo_all_blocks=1 00:05:39.625 --rc geninfo_unexecuted_blocks=1 00:05:39.625 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:39.625 ' 00:05:39.625 18:55:56 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.625 --rc genhtml_branch_coverage=1 00:05:39.625 --rc genhtml_function_coverage=1 00:05:39.625 --rc genhtml_legend=1 00:05:39.625 --rc geninfo_all_blocks=1 00:05:39.625 --rc geninfo_unexecuted_blocks=1 00:05:39.625 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:39.625 ' 00:05:39.625 18:55:56 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.625 --rc genhtml_branch_coverage=1 00:05:39.625 --rc genhtml_function_coverage=1 00:05:39.625 --rc genhtml_legend=1 00:05:39.625 --rc geninfo_all_blocks=1 00:05:39.625 --rc geninfo_unexecuted_blocks=1 00:05:39.625 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:39.625 ' 00:05:39.625 18:55:56 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.625 --rc genhtml_branch_coverage=1 00:05:39.625 --rc genhtml_function_coverage=1 00:05:39.625 --rc genhtml_legend=1 00:05:39.625 --rc geninfo_all_blocks=1 00:05:39.625 --rc geninfo_unexecuted_blocks=1 00:05:39.625 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:39.625 ' 00:05:39.625 18:55:56 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:39.625 18:55:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:39.625 18:55:56 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.625 18:55:56 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:39.625 18:55:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 
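The scripts/common.sh trace that precedes every suite above boils down to one question: is the installed lcov older than 2.x? "lt 1.15 2" splits both version strings on ".-:", walks the fields left to right, and succeeds at the first field where ver1 is smaller; on success the job exports the lcov 1.x "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" spellings seen in LCOV_OPTS. A minimal sketch of that comparison, reconstructed from the xtrace rather than copied from scripts/common.sh (the "decimal" validation and the eq bookkeeping are elided):

    # Sketch reconstructed from the xtrace above, not the verbatim
    # scripts/common.sh: field-wise version comparison behind "lt 1.15 2".
    cmp_versions() {                    # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"          # "1.15" -> (1 15)
        read -ra ver2 <<< "$3"          # "2"    -> (2)
        local v lt=0 gt=0
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { gt=1; break; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { lt=1; break; }
        done
        case "$op" in
            '<') (( lt == 1 )) ;;       # 1.15 vs 2: field 0 gives 1 < 2, so true
            '>') (( gt == 1 )) ;;
        esac
    }
    cmp_versions 1.15 '<' 2 && echo 'lcov 1.x detected: keep the --rc lcov_*_coverage=1 option names'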
00:05:39.625 18:55:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.625 ************************************ 00:05:39.625 START TEST event_perf 00:05:39.625 ************************************ 00:05:39.625 18:55:56 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.625 Running I/O for 1 seconds...[2024-11-26 18:55:56.714143] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:39.625 [2024-11-26 18:55:56.714197] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729367 ] 00:05:39.625 [2024-11-26 18:55:56.783740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:39.625 [2024-11-26 18:55:56.831387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.625 [2024-11-26 18:55:56.831459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.625 [2024-11-26 18:55:56.831537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.625 [2024-11-26 18:55:56.831539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.999 Running I/O for 1 seconds... 00:05:40.999 lcore 0: 193515 00:05:40.999 lcore 1: 193516 00:05:40.999 lcore 2: 193515 00:05:40.999 lcore 3: 193514 00:05:40.999 done. 00:05:40.999 00:05:40.999 real 0m1.169s 00:05:40.999 user 0m4.097s 00:05:40.999 sys 0m0.068s 00:05:40.999 18:55:57 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.999 18:55:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:40.999 ************************************ 00:05:40.999 END TEST event_perf 00:05:40.999 ************************************ 00:05:40.999 18:55:57 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:40.999 18:55:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:40.999 18:55:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.999 18:55:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.999 ************************************ 00:05:40.999 START TEST event_reactor 00:05:40.999 ************************************ 00:05:40.999 18:55:57 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:40.999 [2024-11-26 18:55:57.955840] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
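For the record, the four "lcore N: count" lines from event_perf above are per-core event counts over the one-second window (-t 1), so the aggregate rate is simply their sum. A throwaway one-liner to total them, using the figures from this run (the awk itself is illustrative, not part of the harness):

    # Sum the per-lcore counters printed by event_perf into one events/sec figure.
    printf '%s\n' 'lcore 0: 193515' 'lcore 1: 193516' \
                  'lcore 2: 193515' 'lcore 3: 193514' |
        awk '{ total += $3 } END { printf "total: %d events/sec\n", total }'   # -> 774060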
00:05:40.999 [2024-11-26 18:55:57.955881] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729563 ] 00:05:40.999 [2024-11-26 18:55:58.022195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.999 [2024-11-26 18:55:58.066623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.933 test_start 00:05:41.933 oneshot 00:05:41.933 tick 100 00:05:41.933 tick 100 00:05:41.933 tick 250 00:05:41.933 tick 100 00:05:41.933 tick 100 00:05:41.933 tick 100 00:05:41.933 tick 250 00:05:41.933 tick 500 00:05:41.933 tick 100 00:05:41.933 tick 100 00:05:41.933 tick 250 00:05:41.933 tick 100 00:05:41.933 tick 100 00:05:41.933 test_end 00:05:41.933 00:05:41.933 real 0m1.157s 00:05:41.933 user 0m1.084s 00:05:41.933 sys 0m0.070s 00:05:41.933 18:55:59 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.933 18:55:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:41.933 ************************************ 00:05:41.933 END TEST event_reactor 00:05:41.933 ************************************ 00:05:41.933 18:55:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:41.933 18:55:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:41.933 18:55:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.933 18:55:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.190 ************************************ 00:05:42.190 START TEST event_reactor_perf 00:05:42.190 ************************************ 00:05:42.190 18:55:59 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.190 [2024-11-26 18:55:59.188744] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
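The oneshot/tick output from event_reactor above is consistent with three independent periodic timers firing every 100, 250 and 500 units over the one-second run. A toy replay of that cadence, purely for illustration and not the reactor test's actual implementation, reproduces the same tick sequence:

    # Toy replay of the tick cadence printed by event_reactor above:
    # three periodic timers, checked on a common 50-unit grid.
    for (( t = 50; t <= 900; t += 50 )); do
        for period in 100 250 500; do
            (( t % period == 0 )) && echo "tick $period"
        done
    done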
00:05:42.190 [2024-11-26 18:55:59.188825] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729757 ] 00:05:42.190 [2024-11-26 18:55:59.261397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.190 [2024-11-26 18:55:59.305357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.566 test_start 00:05:43.566 test_end 00:05:43.566 Performance: 909229 events per second 00:05:43.566 00:05:43.566 real 0m1.176s 00:05:43.566 user 0m1.089s 00:05:43.566 sys 0m0.083s 00:05:43.566 18:56:00 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.566 18:56:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.566 ************************************ 00:05:43.566 END TEST event_reactor_perf 00:05:43.566 ************************************ 00:05:43.566 18:56:00 event -- event/event.sh@49 -- # uname -s 00:05:43.566 18:56:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:43.566 18:56:00 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:43.566 18:56:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.566 18:56:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.566 18:56:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.566 ************************************ 00:05:43.566 START TEST event_scheduler 00:05:43.566 ************************************ 00:05:43.566 18:56:00 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:43.566 * Looking for test storage... 
00:05:43.566 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler
00:05:43.566 18:56:00 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:43.566 18:56:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:05:43.566 18:56:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:43.566 18:56:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:43.566 18:56:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:05:43.566 18:56:00 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:43.566 18:56:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:43.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:43.566 --rc genhtml_branch_coverage=1
00:05:43.566 --rc genhtml_function_coverage=1
00:05:43.566 --rc genhtml_legend=1
00:05:43.566 --rc geninfo_all_blocks=1
00:05:43.566 --rc geninfo_unexecuted_blocks=1
00:05:43.566 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:43.566 '
00:05:43.566 18:56:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:43.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:43.566 --rc genhtml_branch_coverage=1
00:05:43.566 --rc genhtml_function_coverage=1
00:05:43.566 --rc genhtml_legend=1
00:05:43.566 --rc geninfo_all_blocks=1
00:05:43.566 --rc geninfo_unexecuted_blocks=1
00:05:43.566 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:43.566 '
00:05:43.566 18:56:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:43.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:43.566 --rc genhtml_branch_coverage=1
00:05:43.566 --rc genhtml_function_coverage=1
00:05:43.566 --rc genhtml_legend=1
00:05:43.566 --rc geninfo_all_blocks=1
00:05:43.566 --rc geninfo_unexecuted_blocks=1
00:05:43.566 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:43.566 '
00:05:43.566 18:56:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:43.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:43.566 --rc genhtml_branch_coverage=1
00:05:43.566 --rc genhtml_function_coverage=1
00:05:43.566 --rc genhtml_legend=1
00:05:43.566 --rc geninfo_all_blocks=1
00:05:43.566 --rc geninfo_unexecuted_blocks=1
00:05:43.566 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:43.566 '
00:05:43.566 18:56:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:43.566 18:56:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2729992
00:05:43.566 18:56:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:43.567 18:56:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:43.567 18:56:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2729992
00:05:43.567 18:56:00 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2729992 ']'
00:05:43.567 18:56:00 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:43.567 18:56:00 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:43.567 18:56:00 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:43.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:43.567 18:56:00 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:43.567 18:56:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:43.567 [2024-11-26 18:56:00.618821] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:05:43.567 [2024-11-26 18:56:00.618891] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729992 ]
00:05:43.567 [2024-11-26 18:56:00.686141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:43.567 [2024-11-26 18:56:00.735193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:43.567 [2024-11-26 18:56:00.735269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:43.567 [2024-11-26 18:56:00.735346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:43.567 [2024-11-26 18:56:00.735348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:43.826 18:56:00 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:43.826 18:56:00 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:05:43.826 18:56:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:43.826 18:56:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:43.826 18:56:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:43.826 [2024-11-26 18:56:00.804012] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:05:43.826 [2024-11-26 18:56:00.804032] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:43.826 [2024-11-26 18:56:00.804044] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:43.826 [2024-11-26 18:56:00.804052] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:43.826 [2024-11-26 18:56:00.804059] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:43.826 18:56:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:43.826 18:56:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:43.826 18:56:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
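The cmp_versions walk in the trace above is SPDK's shell-only version gate: autotest_common.sh enables the legacy lcov --rc options only when `lcov --version` reports something older than 2. Below is a minimal bash sketch of the comparison the xtrace steps through, reconstructed from the scripts/common.sh line numbers in the log; the real helper supports more operators and routes every field through a decimal() fallback, which is elided here.

    # Sketch of the less-than version comparison traced above (assumes
    # purely numeric fields; the real scripts/common.sh handles more).
    cmp_versions_lt() {
        local -a ver1 ver2
        local v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"   # e.g. 1.15 -> (1 15)
        IFS=.-: read -ra ver2 <<< "$2"   # e.g. 2    -> (2)
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal is not less-than
    }
    cmp_versions_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* options"

Fields split on '.', '-' and ':' exactly as the IFS=.-: assignments in the trace show, so 1.15 becomes (1 15) and loses to 2 on the first field, which is why the run exports the lcov_branch/function coverage flags seen above.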
00:05:43.826 18:56:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:43.826 [2024-11-26 18:56:00.878991] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:43.826 18:56:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:43.826 18:56:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:43.826 18:56:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:43.826 18:56:00 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:43.826 18:56:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:43.826 ************************************
00:05:43.826 START TEST scheduler_create_thread
00:05:43.826 ************************************
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:43.826 2
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:43.826 3
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:43.826 4
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:43.826 5
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:43.826 6
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:43.826 7
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:43.826 8
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:43.826 9
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:43.826 18:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:43.826 10
00:05:43.826 18:56:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:43.826 18:56:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:43.826 18:56:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:43.826 18:56:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:43.826 18:56:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:43.826 18:56:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:43.826 18:56:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:43.826 18:56:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:43.826 18:56:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:44.762 18:56:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:44.762 18:56:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:44.762 18:56:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:44.762 18:56:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:46.137 18:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:46.137 18:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:46.137 18:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:46.137 18:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:46.137 18:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:47.511 18:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:47.511
00:05:47.511 real 0m3.378s
00:05:47.511 user 0m0.024s
00:05:47.511 sys 0m0.007s
00:05:47.511 18:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:47.511 18:56:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:47.511 ************************************
00:05:47.511 END TEST scheduler_create_thread
00:05:47.511 ************************************
00:05:47.511 18:56:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:47.511 18:56:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2729992
00:05:47.511 18:56:04 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2729992 ']'
00:05:47.511 18:56:04 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2729992
00:05:47.511 18:56:04 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:05:47.511 18:56:04 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:47.511 18:56:04 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2729992
00:05:47.512 18:56:04 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:47.512 18:56:04 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:47.512 18:56:04 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2729992'
00:05:47.512 killing process with pid 2729992
00:05:47.512 18:56:04 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2729992
00:05:47.512 18:56:04 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2729992
00:05:47.512 [2024-11-26 18:56:04.674961] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
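Condensed, the scheduler_create_thread test that just passed is a fixed RPC sequence against the scheduler app. The sketch below is inferred from the scheduler.sh line numbers in the trace; it substitutes plain rpc.py invocations for the harness's rpc_cmd wrapper (an assumption for illustration) and the thread ids in the comments are the ones printed in the log.

    # Condensed sketch of the RPC sequence scheduler.sh drives above.
    # Assumes rpc.py can load the test's scheduler_plugin from the repo.
    rpc="scripts/rpc.py --plugin scheduler_plugin"
    $rpc framework_set_scheduler dynamic     # line 39, before start_init
    $rpc framework_start_init                # line 40
    for mask in 0x1 0x2 0x4 0x8; do          # lines 12-15: busy thread per core
        $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
    done
    for mask in 0x1 0x2 0x4 0x8; do          # lines 16-19: idle thread per core
        $rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    $rpc scheduler_thread_create -n one_third_active -a 30   # line 21 -> id 10
    id=$($rpc scheduler_thread_create -n half_active -a 0)   # line 22 -> id 11
    $rpc scheduler_thread_set_active "$id" 50                # line 23
    id=$($rpc scheduler_thread_create -n deleted -a 100)     # line 25 -> id 12
    $rpc scheduler_thread_delete "$id"                       # line 26

The point of the sequence is to give the dynamic scheduler a mixed load (pinned busy, pinned idle, fractional activity) and then mutate it live, which is why the test runs for the ~3.4 s reported above.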
00:05:47.771 00:05:47.771 real 0m4.441s 00:05:47.771 user 0m7.859s 00:05:47.771 sys 0m0.376s 00:05:47.771 18:56:04 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.771 18:56:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.771 ************************************ 00:05:47.771 END TEST event_scheduler 00:05:47.771 ************************************ 00:05:47.771 18:56:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:47.771 18:56:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:47.771 18:56:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.771 18:56:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.771 18:56:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.771 ************************************ 00:05:47.771 START TEST app_repeat 00:05:47.771 ************************************ 00:05:47.771 18:56:04 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:47.771 18:56:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.771 18:56:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.771 18:56:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:47.771 18:56:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.771 18:56:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:47.771 18:56:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:47.771 18:56:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:47.771 18:56:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2730575 00:05:47.771 18:56:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.771 18:56:04 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:47.771 18:56:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2730575' 00:05:47.771 Process app_repeat pid: 2730575 00:05:47.771 18:56:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.771 18:56:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:47.771 spdk_app_start Round 0 00:05:47.771 18:56:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2730575 /var/tmp/spdk-nbd.sock 00:05:47.771 18:56:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2730575 ']' 00:05:47.771 18:56:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.771 18:56:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.771 18:56:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.771 18:56:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.771 18:56:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.771 [2024-11-26 18:56:04.977633] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
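The waitforlisten call traced just above is the gate every app in this log passes before its test proceeds: poll until the new process answers on its RPC socket. A simplified sketch of that loop follows; the real autotest_common.sh helper differs in details (retry pacing, cleanup), and rpc_get_methods is used here only as a harmless probe.

    # Simplified sketch of waitforlisten, assuming GNU coreutils and
    # SPDK's scripts/rpc.py on PATH-relative location.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for (( i = 0; i < max_retries; i++ )); do
            # Success means the app initialized far enough to serve RPCs.
            if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0
            fi
            kill -0 "$pid" 2> /dev/null || return 1   # app died before listening
            sleep 0.5
        done
        return 1
    }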
00:05:47.771 [2024-11-26 18:56:04.977717] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730575 ]
00:05:48.030 [2024-11-26 18:56:05.051637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:48.030 [2024-11-26 18:56:05.097081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:48.030 [2024-11-26 18:56:05.097083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:48.030 18:56:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:48.030 18:56:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:48.030 18:56:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:48.289 Malloc0
00:05:48.289 18:56:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:48.547 Malloc1
00:05:48.547 18:56:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:48.547 18:56:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:48.547 18:56:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:48.547 18:56:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:48.547 18:56:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:48.547 18:56:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:48.547 18:56:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:48.547 18:56:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:48.547 18:56:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:48.547 18:56:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:48.547 18:56:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:48.547 18:56:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:48.547 18:56:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:48.547 18:56:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:48.547 18:56:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:48.547 18:56:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:48.806 /dev/nbd0
00:05:48.806 18:56:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:48.806 18:56:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:48.806 18:56:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:48.806 18:56:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:48.806 18:56:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:48.806 18:56:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:48.806 18:56:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:48.806 18:56:05 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:48.806 18:56:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:48.806 18:56:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:48.806 18:56:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:48.806 1+0 records in
00:05:48.806 1+0 records out
00:05:48.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211641 s, 19.4 MB/s
00:05:48.806 18:56:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:48.806 18:56:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:48.806 18:56:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:48.806 18:56:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:48.806 18:56:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:48.806 18:56:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:48.806 18:56:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:48.806 18:56:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:49.064 /dev/nbd1
00:05:49.064 18:56:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:49.064 18:56:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:49.064 18:56:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:49.064 18:56:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:49.064 18:56:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:49.064 18:56:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:49.064 18:56:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:49.064 18:56:06 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:49.064 18:56:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:49.064 18:56:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:49.064 18:56:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:49.064 1+0 records in
00:05:49.064 1+0 records out
00:05:49.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249013 s, 16.4 MB/s
00:05:49.064 18:56:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:49.064 18:56:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:49.064 18:56:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:49.064 18:56:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:49.064 18:56:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:49.064 18:56:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:49.064 18:56:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
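Each nbd_start_disk above is followed by the waitfornbd readiness check traced twice: the device must appear in /proc/partitions and a one-block direct-I/O read must come back as 4096 bytes. A sketch of that check, simplified from what the autotest_common.sh trace shows; /tmp/nbdtest here is a stand-in for the harness's temp file under the spdk test tree.

    # Sketch of the waitfornbd check: device visible, then readable
    # with O_DIRECT. Retry bounds (20) match the xtrace above.
    waitfornbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for (( i = 1; i <= 20; i++ )); do
            if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2> /dev/null; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [ "$size" != 0 ] && return 0   # one full block read: device is up
            fi
            sleep 0.1
        done
        return 1
    }

The direct-I/O read matters: it bypasses the page cache, so a success proves the NBD kernel device is actually wired to the SPDK bdev, not just registered.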
00:05:49.064 18:56:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:49.064 18:56:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:49.064 18:56:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:49.064 18:56:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:49.064 {
00:05:49.064 "nbd_device": "/dev/nbd0",
00:05:49.064 "bdev_name": "Malloc0"
00:05:49.064 },
00:05:49.064 {
00:05:49.064 "nbd_device": "/dev/nbd1",
00:05:49.064 "bdev_name": "Malloc1"
00:05:49.064 }
00:05:49.064 ]'
00:05:49.064 18:56:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:49.064 18:56:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:49.064 {
00:05:49.064 "nbd_device": "/dev/nbd0",
00:05:49.064 "bdev_name": "Malloc0"
00:05:49.064 },
00:05:49.064 {
00:05:49.064 "nbd_device": "/dev/nbd1",
00:05:49.064 "bdev_name": "Malloc1"
00:05:49.064 }
00:05:49.064 ]'
00:05:49.322 18:56:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:49.322 /dev/nbd1'
00:05:49.322 18:56:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:49.322 /dev/nbd1'
00:05:49.322 18:56:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:49.322 18:56:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:49.322 18:56:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:49.322 18:56:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:49.322 18:56:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:49.322 18:56:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:49.322 18:56:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:49.322 18:56:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:49.322 18:56:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:49.322 18:56:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:49.322 18:56:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:49.322 18:56:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:49.323 256+0 records in
00:05:49.323 256+0 records out
00:05:49.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107226 s, 97.8 MB/s
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:49.323 256+0 records in
00:05:49.323 256+0 records out
00:05:49.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203403 s, 51.6 MB/s
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:49.323 256+0 records in
00:05:49.323 256+0 records out
00:05:49.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219107 s, 47.9 MB/s
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:49.323 18:56:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:49.580 18:56:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:49.580 18:56:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:49.580 18:56:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:49.580 18:56:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:49.581 18:56:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:49.581 18:56:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:49.581 18:56:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:49.581 18:56:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:49.581 18:56:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:49.581 18:56:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:49.838 18:56:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:49.838 18:56:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:49.838 18:56:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:49.838 18:56:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:49.838 18:56:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:49.838 18:56:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:49.838 18:56:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:49.838 18:56:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:49.838 18:56:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:49.838 18:56:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:49.838 18:56:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:49.838 18:56:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:49.838 18:56:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:49.838 18:56:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:50.096 18:56:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:50.096 18:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:50.096 18:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:50.096 18:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:50.096 18:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:50.096 18:56:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:50.096 18:56:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:50.096 18:56:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:50.096 18:56:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:50.096 18:56:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:50.096 18:56:07 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:50.353 [2024-11-26 18:56:07.445085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:50.353 [2024-11-26 18:56:07.488548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:50.353 [2024-11-26 18:56:07.488551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.353 [2024-11-26 18:56:07.529188] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:50.353 [2024-11-26 18:56:07.529234] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:53.634 18:56:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:53.634 18:56:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:53.634 spdk_app_start Round 1
00:05:53.634 18:56:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2730575 /var/tmp/spdk-nbd.sock
00:05:53.634 18:56:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2730575 ']'
00:05:53.634 18:56:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:53.634 18:56:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:53.634 18:56:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:53.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
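Round 0 ended with the write/verify pass and teardown traced above: nbd_dd_data_verify fills a 1 MiB temp file from /dev/urandom, dd's it onto each NBD device with O_DIRECT, then cmp's it back byte-for-byte before the devices are stopped and the app is told to exit with spdk_kill_instance SIGTERM. A sketch of the data round-trip follows; it is condensed (the real bdev/nbd_common.sh helper is invoked once per operation, write then verify, as the trace shows) and /tmp/nbdrandtest stands in for the harness's temp file.

    # Condensed sketch of the write-then-verify data check above.
    nbd_dd_data_roundtrip() {
        local tmp=/tmp/nbdrandtest nbd
        dd if=/dev/urandom of="$tmp" bs=4096 count=256        # 1 MiB pattern
        for nbd in /dev/nbd0 /dev/nbd1; do
            dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
        done
        for nbd in /dev/nbd0 /dev/nbd1; do
            cmp -b -n 1M "$tmp" "$nbd" || return 1            # byte-for-byte
        done
        rm "$tmp"
    }

Writing with oflag=direct and comparing the first 1M back through the block device is what actually exercises the SPDK bdev data path; the nbd_get_count of 0 afterwards confirms both devices detached cleanly.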
00:05:53.634 18:56:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:53.634 18:56:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:53.634 18:56:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:53.634 18:56:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:53.634 18:56:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:53.634 Malloc0
00:05:53.634 18:56:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:53.892 Malloc1
00:05:53.892 18:56:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:53.892 18:56:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:53.892 18:56:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:53.892 18:56:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:53.892 18:56:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:53.892 18:56:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:53.892 18:56:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:53.892 18:56:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:53.892 18:56:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:53.892 18:56:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:53.892 18:56:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:53.892 18:56:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:53.892 18:56:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:53.892 18:56:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:53.892 18:56:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:53.892 18:56:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:54.151 /dev/nbd0
00:05:54.151 18:56:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:54.151 18:56:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:54.151 18:56:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:54.151 18:56:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:54.151 18:56:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:54.151 18:56:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:54.151 18:56:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:54.151 18:56:11 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:54.151 18:56:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:54.151 18:56:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:54.151 18:56:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:54.151 1+0 records in
00:05:54.151 1+0 records out
00:05:54.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259315 s, 15.8 MB/s
00:05:54.151 18:56:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:54.151 18:56:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:54.151 18:56:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:54.151 18:56:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:54.151 18:56:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:54.151 18:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:54.151 18:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:54.151 18:56:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:54.409 /dev/nbd1
00:05:54.409 18:56:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:54.409 18:56:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:54.409 18:56:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:54.409 18:56:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:54.409 18:56:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:54.409 18:56:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:54.409 18:56:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:54.409 18:56:11 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:54.409 18:56:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:54.409 18:56:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:54.409 18:56:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:54.409 1+0 records in
00:05:54.409 1+0 records out
00:05:54.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268838 s, 15.2 MB/s
00:05:54.409 18:56:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:54.409 18:56:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:54.409 18:56:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:54.409 18:56:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:54.409 18:56:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:54.409 18:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:54.409 18:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:54.409 18:56:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:54.409 18:56:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:54.409 18:56:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:54.409 18:56:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:54.409 {
00:05:54.409 "nbd_device": "/dev/nbd0",
00:05:54.409 "bdev_name": "Malloc0"
00:05:54.409 },
00:05:54.409 {
00:05:54.409 "nbd_device": "/dev/nbd1",
00:05:54.409 "bdev_name": "Malloc1"
00:05:54.409 }
00:05:54.409 ]'
00:05:54.409 18:56:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:54.409 18:56:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:54.409 {
00:05:54.409 "nbd_device": "/dev/nbd0",
00:05:54.409 "bdev_name": "Malloc0"
00:05:54.409 },
00:05:54.409 {
00:05:54.409 "nbd_device": "/dev/nbd1",
00:05:54.409 "bdev_name": "Malloc1"
00:05:54.409 }
00:05:54.409 ]'
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:54.668 /dev/nbd1'
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:54.668 /dev/nbd1'
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:54.668 256+0 records in
00:05:54.668 256+0 records out
00:05:54.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050849 s, 206 MB/s
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:54.668 256+0 records in
00:05:54.668 256+0 records out
00:05:54.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207831 s, 50.5 MB/s
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:54.668 256+0 records in
00:05:54.668 256+0 records out
00:05:54.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223449 s, 46.9 MB/s
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:54.668 18:56:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:54.927 18:56:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:54.927 18:56:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:54.927 18:56:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:54.927 18:56:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:54.927 18:56:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:54.927 18:56:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:54.927 18:56:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:54.927 18:56:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:54.927 18:56:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:54.927 18:56:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:55.224 18:56:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:55.225 18:56:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:55.571 18:56:12 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:55.907 [2024-11-26 18:56:12.806320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:55.907 [2024-11-26 18:56:12.851186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:55.907 [2024-11-26 18:56:12.851188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:55.907 [2024-11-26 18:56:12.892848] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:55.907 [2024-11-26 18:56:12.892891] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:59.182 18:56:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:59.182 18:56:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:59.182 spdk_app_start Round 2
00:05:59.182 18:56:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2730575 /var/tmp/spdk-nbd.sock
00:05:59.182 18:56:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2730575 ']'
00:05:59.182 18:56:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:59.182 18:56:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:59.182 18:56:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:59.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
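The same create/verify/stop cycle now repeats for Round 2. Condensed, the outer loop driving these rounds looks like the sketch below; it is inferred from the event.sh line numbers that recur in the trace, not the literal script, and the rpc shorthand and helper names are carried over from the sketches above.

    # Condensed shape of app_repeat_test's round loop (event.sh lines
    # noted per the trace; app_repeat was started with -t 4 so it
    # survives the SIGTERM and restarts for the next round).
    for round in {0..2}; do                                  # line 23
        echo "spdk_app_start Round $round"                   # line 24
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # line 25
        rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
        $rpc bdev_malloc_create 64 4096                      # line 27 -> Malloc0
        $rpc bdev_malloc_create 64 4096                      # line 28 -> Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock \
            'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'          # line 30
        $rpc spdk_kill_instance SIGTERM                      # line 34
        sleep 3                                              # line 35
    done

Restarting the whole app between rounds is the point of the test: each round re-registers the bdevs and notification types, which is why the 'already registered' notices appear at every restart above.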
00:05:59.182 18:56:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:59.182 18:56:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:59.182 18:56:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:59.182 18:56:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:59.182 18:56:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:59.182 Malloc0
00:05:59.182 18:56:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:59.182 Malloc1
00:05:59.182 18:56:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:59.182 18:56:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:59.182 18:56:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:59.182 18:56:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:59.182 18:56:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:59.182 18:56:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:59.182 18:56:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:59.182 18:56:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:59.182 18:56:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:59.182 18:56:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:59.182 18:56:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:59.182 18:56:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:59.182 18:56:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:59.182 18:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:59.182 18:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:59.182 18:56:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:59.438 /dev/nbd0
00:05:59.439 18:56:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:59.439 18:56:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:59.439 18:56:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:59.439 18:56:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:59.439 18:56:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:59.439 18:56:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:59.439 18:56:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:59.439 18:56:16 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:59.439 18:56:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:59.439 18:56:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:59.439 18:56:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:59.439 1+0 records in
00:05:59.439 1+0 records out
00:05:59.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262882 s, 15.6 MB/s
00:05:59.439 18:56:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:59.439 18:56:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:59.439 18:56:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:59.439 18:56:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:59.439 18:56:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:59.439 18:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:59.439 18:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:59.439 18:56:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:59.696 /dev/nbd1
00:05:59.696 18:56:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:59.696 18:56:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:59.696 18:56:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:59.696 18:56:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:59.696 18:56:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:59.696 18:56:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:59.696 18:56:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:59.696 18:56:16 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:59.696 18:56:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:59.696 18:56:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:59.696 18:56:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:59.696 1+0 records in
00:05:59.696 1+0 records out
00:05:59.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178227 s, 23.0 MB/s
00:05:59.696 18:56:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:59.696 18:56:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:59.696 18:56:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:59.696 18:56:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:59.696 18:56:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:59.696 18:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:59.696 18:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:59.696 18:56:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:59.696 18:56:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:59.696 18:56:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:59.954 {
00:05:59.954 "nbd_device": "/dev/nbd0",
00:05:59.954 "bdev_name": "Malloc0"
00:05:59.954 },
00:05:59.954 {
00:05:59.954 "nbd_device": "/dev/nbd1",
00:05:59.954 "bdev_name": "Malloc1"
00:05:59.954 }
00:05:59.954 ]'
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:59.954 {
00:05:59.954 "nbd_device": "/dev/nbd0",
00:05:59.954 "bdev_name": "Malloc0"
00:05:59.954 },
00:05:59.954 {
00:05:59.954 "nbd_device": "/dev/nbd1",
00:05:59.954 "bdev_name": "Malloc1"
00:05:59.954 }
00:05:59.954 ]'
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:59.954 /dev/nbd1'
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:59.954 /dev/nbd1'
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:59.954 256+0 records in
00:05:59.954 256+0 records out
00:05:59.954 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526523 s, 199 MB/s
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:59.954 18:56:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:59.954 256+0 records in
00:05:59.954 256+0 records out
00:05:59.954 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203188 s, 51.6 MB/s
00:05:59.954 18:56:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:59.954 18:56:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:59.955 256+0 records in
00:05:59.955 256+0 records out
00:05:59.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222087 s, 47.2 MB/s
00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@71 -- #
local operation=verify 00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.955 18:56:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.212 18:56:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.212 18:56:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.212 18:56:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.212 18:56:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.212 18:56:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.212 18:56:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.212 18:56:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.212 18:56:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.212 18:56:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.212 18:56:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.469 18:56:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.469 18:56:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.469 18:56:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.469 18:56:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.469 18:56:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.469 18:56:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.469 18:56:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.469 18:56:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.469 18:56:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.469 18:56:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.469 18:56:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.727 18:56:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.727 18:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.727 18:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.727 18:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.727 18:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.727 18:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.727 18:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.727 18:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.727 18:56:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.727 18:56:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.727 18:56:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.727 18:56:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.727 18:56:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.984 18:56:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.984 [2024-11-26 18:56:18.120287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.984 [2024-11-26 18:56:18.163918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.984 [2024-11-26 18:56:18.163920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.241 [2024-11-26 18:56:18.204685] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.241 [2024-11-26 18:56:18.204726] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.773 18:56:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2730575 /var/tmp/spdk-nbd.sock 00:06:03.773 18:56:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2730575 ']' 00:06:03.773 18:56:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.773 18:56:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.773 18:56:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
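The data-verify pass that just completed follows a simple round-trip: fill a temporary file with 1 MiB from /dev/urandom, dd it onto each exported NBD device with oflag=direct, then cmp the device contents byte-for-byte against the source file before tearing the disks down and confirming the NBD count drops back to zero. A minimal standalone sketch of the same idea (device names and the 1 MiB size mirror the trace; the temp-file path is illustrative):

    # round-trip write/verify against exported NBD devices (illustrative sketch)
    tmp=$(mktemp)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write, bypassing the page cache
        cmp -b -n 1M "$tmp" "$nbd"                             # fail if any byte differs
    done
    rm -f "$tmp"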
00:06:03.773 18:56:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.773 18:56:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.032 18:56:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.032 18:56:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:04.032 18:56:21 event.app_repeat -- event/event.sh@39 -- # killprocess 2730575 00:06:04.032 18:56:21 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2730575 ']' 00:06:04.032 18:56:21 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2730575 00:06:04.032 18:56:21 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:04.032 18:56:21 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.032 18:56:21 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2730575 00:06:04.032 18:56:21 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.032 18:56:21 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.032 18:56:21 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2730575' 00:06:04.032 killing process with pid 2730575 00:06:04.032 18:56:21 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2730575 00:06:04.032 18:56:21 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2730575 00:06:04.291 spdk_app_start is called in Round 0. 00:06:04.291 Shutdown signal received, stop current app iteration 00:06:04.291 Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 reinitialization... 00:06:04.291 spdk_app_start is called in Round 1. 00:06:04.291 Shutdown signal received, stop current app iteration 00:06:04.291 Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 reinitialization... 00:06:04.291 spdk_app_start is called in Round 2. 00:06:04.291 Shutdown signal received, stop current app iteration 00:06:04.291 Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 reinitialization... 00:06:04.291 spdk_app_start is called in Round 3. 
00:06:04.291 Shutdown signal received, stop current app iteration 00:06:04.291 18:56:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:04.291 18:56:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:04.291 00:06:04.291 real 0m16.413s 00:06:04.291 user 0m35.350s 00:06:04.291 sys 0m3.201s 00:06:04.291 18:56:21 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.291 18:56:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.291 ************************************ 00:06:04.291 END TEST app_repeat 00:06:04.291 ************************************ 00:06:04.291 18:56:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:04.291 18:56:21 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:04.291 18:56:21 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.291 18:56:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.291 18:56:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.291 ************************************ 00:06:04.291 START TEST cpu_locks 00:06:04.291 ************************************ 00:06:04.291 18:56:21 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:04.551 * Looking for test storage... 00:06:04.551 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:04.551 18:56:21 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.551 18:56:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.551 18:56:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.551 18:56:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.551 18:56:21 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:04.551 18:56:21 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.551 18:56:21 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.551 --rc genhtml_branch_coverage=1 00:06:04.551 --rc genhtml_function_coverage=1 00:06:04.551 --rc genhtml_legend=1 00:06:04.551 --rc geninfo_all_blocks=1 00:06:04.551 --rc geninfo_unexecuted_blocks=1 00:06:04.551 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:04.551 ' 00:06:04.551 18:56:21 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.551 --rc genhtml_branch_coverage=1 00:06:04.551 --rc genhtml_function_coverage=1 00:06:04.551 --rc genhtml_legend=1 00:06:04.551 --rc geninfo_all_blocks=1 00:06:04.551 --rc geninfo_unexecuted_blocks=1 00:06:04.551 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:04.551 ' 00:06:04.551 18:56:21 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.551 --rc genhtml_branch_coverage=1 00:06:04.551 --rc genhtml_function_coverage=1 00:06:04.551 --rc genhtml_legend=1 00:06:04.551 --rc geninfo_all_blocks=1 00:06:04.551 --rc geninfo_unexecuted_blocks=1 00:06:04.551 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:04.551 ' 00:06:04.551 18:56:21 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.551 --rc genhtml_branch_coverage=1 00:06:04.551 --rc genhtml_function_coverage=1 00:06:04.551 --rc genhtml_legend=1 00:06:04.551 --rc geninfo_all_blocks=1 00:06:04.551 --rc geninfo_unexecuted_blocks=1 00:06:04.551 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:04.551 ' 00:06:04.551 18:56:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:04.551 18:56:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:04.551 18:56:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:04.551 18:56:21 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:04.551 18:56:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.551 18:56:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.551 18:56:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.551 ************************************ 00:06:04.551 START TEST default_locks 00:06:04.551 ************************************ 00:06:04.551 18:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:04.551 18:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2733016 00:06:04.551 18:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2733016 00:06:04.551 18:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.551 18:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2733016 ']' 00:06:04.551 18:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.551 18:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.551 18:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.551 18:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.551 18:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.551 [2024-11-26 18:56:21.705569] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
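Each run_test case starts from a cold target: spdk_tgt is launched pinned to a core mask (-m 0x1 here) and the harness blocks in waitforlisten until the target's RPC socket answers or the retry budget runs out. Reduced to a sketch (the retry count and socket path mirror the trace; the loop body is a simplification of the real helper, which also prints the waiting banner seen above):

    # poll until an SPDK target answers on its UNIX-domain RPC socket
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 1; i <= 100; i++)); do                       # max_retries=100, as in the trace
            kill -0 "$pid" 2>/dev/null || return 1             # target died during startup
            scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1                                               # never came up
    }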
00:06:04.551 [2024-11-26 18:56:21.705639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2733016 ] 00:06:04.810 [2024-11-26 18:56:21.778393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.810 [2024-11-26 18:56:21.823485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.068 18:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.068 18:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:05.068 18:56:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2733016 00:06:05.068 18:56:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2733016 00:06:05.068 18:56:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.635 lslocks: write error 00:06:05.635 18:56:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2733016 00:06:05.635 18:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2733016 ']' 00:06:05.635 18:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2733016 00:06:05.635 18:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:05.635 18:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.635 18:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2733016 00:06:05.635 18:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.635 18:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.635 18:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2733016' 00:06:05.635 killing process with pid 2733016 00:06:05.635 18:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2733016 00:06:05.635 18:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2733016 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2733016 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2733016 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2733016 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2733016 ']' 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.893 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2733016) - No such process 00:06:05.893 ERROR: process (pid: 2733016) is no longer running 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.893 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.894 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.894 18:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:05.894 18:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.894 18:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.894 18:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:05.894 00:06:05.894 real 0m1.366s 00:06:05.894 user 0m1.353s 00:06:05.894 sys 0m0.677s 00:06:05.894 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.894 18:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.894 ************************************ 00:06:05.894 END TEST default_locks 00:06:05.894 ************************************ 00:06:05.894 18:56:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:05.894 18:56:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.894 18:56:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.894 18:56:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.157 ************************************ 00:06:06.157 START TEST default_locks_via_rpc 00:06:06.157 ************************************ 00:06:06.157 18:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:06.157 18:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2733292 00:06:06.157 18:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2733292 00:06:06.157 18:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.157 18:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2733292 ']' 00:06:06.157 18:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.157 18:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 
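Two idioms from the default_locks case that just wrapped up are worth noting. First, locks_exist proves the target really took its CPU-core lock by piping lslocks -p <pid> into grep -q spdk_cpu_lock; the stray "lslocks: write error" is benign, just lslocks catching a closed pipe once grep -q exits on its first match. Second, rerunning waitforlisten on the killed pid is wrapped in NOT, which inverts the exit status so the expected startup failure is recorded as a pass. A minimal stand-in for that inversion (the real NOT in autotest_common.sh does extra bookkeeping):

    # succeed only when the wrapped command fails (illustrative stand-in for NOT)
    not() { ! "$@"; }

    not waitforlisten 2733016 && echo 'pid 2733016 is gone, as expected'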
00:06:06.157 18:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.157 18:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.157 18:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.157 [2024-11-26 18:56:23.148919] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:06.157 [2024-11-26 18:56:23.148991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2733292 ] 00:06:06.157 [2024-11-26 18:56:23.219891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.157 [2024-11-26 18:56:23.266856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.415 18:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.415 18:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.415 18:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:06.415 18:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.415 18:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.415 18:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.415 18:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:06.416 18:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.416 18:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.416 18:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.416 18:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.416 18:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.416 18:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.416 18:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.416 18:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2733292 00:06:06.416 18:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.416 18:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2733292 00:06:06.980 18:56:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2733292 00:06:06.980 18:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2733292 ']' 00:06:06.980 18:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2733292 00:06:06.980 18:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:07.238 18:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:06:07.238 18:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2733292 00:06:07.238 18:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.238 18:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.238 18:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2733292' 00:06:07.238 killing process with pid 2733292 00:06:07.238 18:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2733292 00:06:07.238 18:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2733292 00:06:07.497 00:06:07.497 real 0m1.420s 00:06:07.497 user 0m1.412s 00:06:07.497 sys 0m0.694s 00:06:07.497 18:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.497 18:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.497 ************************************ 00:06:07.497 END TEST default_locks_via_rpc 00:06:07.497 ************************************ 00:06:07.497 18:56:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:07.497 18:56:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.497 18:56:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.497 18:56:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.497 ************************************ 00:06:07.497 START TEST non_locking_app_on_locked_coremask 00:06:07.497 ************************************ 00:06:07.497 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:07.497 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2733499 00:06:07.497 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2733499 /var/tmp/spdk.sock 00:06:07.497 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2733499 ']' 00:06:07.497 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.497 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.497 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.497 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.497 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.497 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.497 [2024-11-26 18:56:24.640023] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:06:07.497 [2024-11-26 18:56:24.640103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2733499 ] 00:06:07.755 [2024-11-26 18:56:24.711073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.755 [2024-11-26 18:56:24.758803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.755 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.755 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.013 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2733507 00:06:08.013 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2733507 /var/tmp/spdk2.sock 00:06:08.013 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2733507 ']' 00:06:08.013 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.013 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.013 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.013 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.013 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.013 18:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:08.013 [2024-11-26 18:56:24.991413] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:08.013 [2024-11-26 18:56:24.991486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2733507 ] 00:06:08.013 [2024-11-26 18:56:25.085303] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
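This is the heart of non_locking_app_on_locked_coremask: the first target (pid 2733499) holds the core-0 lock, and a second target still comes up on the very same mask because it opts out with --disable-cpumask-locks and talks on its own RPC socket, hence the "CPU core locks deactivated." notice above. Reduced to the two launches (binary and socket paths as in the trace, backgrounding added for the sketch):

    build/bin/spdk_tgt -m 0x1 &                                                  # takes the core-0 lock
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # skips the lock, coexists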
00:06:08.013 [2024-11-26 18:56:25.085337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.013 [2024-11-26 18:56:25.186636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.948 18:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.948 18:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.948 18:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2733499 00:06:08.948 18:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2733499 00:06:08.948 18:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.319 lslocks: write error 00:06:10.319 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2733499 00:06:10.319 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2733499 ']' 00:06:10.319 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2733499 00:06:10.319 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:10.319 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.319 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2733499 00:06:10.319 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.319 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.319 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2733499' 00:06:10.319 killing process with pid 2733499 00:06:10.319 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2733499 00:06:10.319 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2733499 00:06:10.885 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2733507 00:06:10.885 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2733507 ']' 00:06:10.885 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2733507 00:06:10.885 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:10.886 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.886 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2733507 00:06:10.886 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.886 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.886 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2733507' 00:06:10.886 
killing process with pid 2733507 00:06:10.886 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2733507 00:06:10.886 18:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2733507 00:06:11.144 00:06:11.144 real 0m3.540s 00:06:11.144 user 0m3.719s 00:06:11.144 sys 0m1.326s 00:06:11.144 18:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.144 18:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.144 ************************************ 00:06:11.144 END TEST non_locking_app_on_locked_coremask 00:06:11.144 ************************************ 00:06:11.144 18:56:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:11.144 18:56:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.144 18:56:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.144 18:56:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.144 ************************************ 00:06:11.144 START TEST locking_app_on_unlocked_coremask 00:06:11.144 ************************************ 00:06:11.144 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:11.144 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2733941 00:06:11.144 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2733941 /var/tmp/spdk.sock 00:06:11.144 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:11.144 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2733941 ']' 00:06:11.144 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.144 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.144 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.144 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.144 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.144 [2024-11-26 18:56:28.258850] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:11.144 [2024-11-26 18:56:28.258912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2733941 ] 00:06:11.144 [2024-11-26 18:56:28.331477] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
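locking_app_on_unlocked_coremask flips the roles: now the first target is the one launched with --disable-cpumask-locks (the deactivation notice just above), leaving core 0 unclaimed, and the second target, started without the flag, is the one expected to acquire the lock. That is why the locks_exist check later in this case runs against the second pid rather than the first:

    lslocks -p 2734072 | grep -q spdk_cpu_lock   # the second instance owns the core lock here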
00:06:11.144 [2024-11-26 18:56:28.331510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.402 [2024-11-26 18:56:28.380576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.402 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.402 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.402 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2734072 00:06:11.402 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2734072 /var/tmp/spdk2.sock 00:06:11.402 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:11.402 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2734072 ']' 00:06:11.402 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.402 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.402 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.402 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.402 18:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.661 [2024-11-26 18:56:28.620109] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:06:11.661 [2024-11-26 18:56:28.620178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2734072 ] 00:06:11.661 [2024-11-26 18:56:28.716819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.661 [2024-11-26 18:56:28.805092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.595 18:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.595 18:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:12.595 18:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2734072 00:06:12.595 18:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2734072 00:06:12.595 18:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.530 lslocks: write error 00:06:13.530 18:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2733941 00:06:13.530 18:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2733941 ']' 00:06:13.530 18:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2733941 00:06:13.530 18:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:13.530 18:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.530 18:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2733941 00:06:13.530 18:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.530 18:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.530 18:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2733941' 00:06:13.530 killing process with pid 2733941 00:06:13.530 18:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2733941 00:06:13.530 18:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2733941 00:06:14.097 18:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2734072 00:06:14.097 18:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2734072 ']' 00:06:14.097 18:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2734072 00:06:14.097 18:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:14.097 18:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.097 18:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2734072 00:06:14.355 18:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.355 18:56:31 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.355 18:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2734072' 00:06:14.355 killing process with pid 2734072 00:06:14.355 18:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2734072 00:06:14.355 18:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2734072 00:06:14.614 00:06:14.614 real 0m3.384s 00:06:14.614 user 0m3.569s 00:06:14.614 sys 0m1.268s 00:06:14.614 18:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.614 18:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.614 ************************************ 00:06:14.614 END TEST locking_app_on_unlocked_coremask 00:06:14.614 ************************************ 00:06:14.614 18:56:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:14.614 18:56:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.614 18:56:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.614 18:56:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.614 ************************************ 00:06:14.614 START TEST locking_app_on_locked_coremask 00:06:14.614 ************************************ 00:06:14.614 18:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:14.614 18:56:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2734463 00:06:14.614 18:56:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2734463 /var/tmp/spdk.sock 00:06:14.614 18:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2734463 ']' 00:06:14.614 18:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.615 18:56:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.615 18:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.615 18:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.615 18:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.615 18:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.615 [2024-11-26 18:56:31.704200] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:06:14.615 [2024-11-26 18:56:31.704241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2734463 ] 00:06:14.615 [2024-11-26 18:56:31.768148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.615 [2024-11-26 18:56:31.817418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.872 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.872 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:14.872 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2734471 00:06:14.873 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2734471 /var/tmp/spdk2.sock 00:06:14.873 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:14.873 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2734471 /var/tmp/spdk2.sock 00:06:14.873 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:14.873 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:14.873 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.873 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:14.873 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.873 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2734471 /var/tmp/spdk2.sock 00:06:14.873 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2734471 ']' 00:06:14.873 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.873 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.873 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.873 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.873 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.873 [2024-11-26 18:56:32.053320] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:06:14.873 [2024-11-26 18:56:32.053411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2734471 ] 00:06:15.130 [2024-11-26 18:56:32.149820] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2734463 has claimed it. 00:06:15.130 [2024-11-26 18:56:32.149854] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.695 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2734471) - No such process 00:06:15.695 ERROR: process (pid: 2734471) is no longer running 00:06:15.695 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.695 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:15.695 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:15.695 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.695 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:15.695 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.695 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2734463 00:06:15.695 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2734463 00:06:15.695 18:56:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.953 lslocks: write error 00:06:15.953 18:56:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2734463 00:06:15.953 18:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2734463 ']' 00:06:15.953 18:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2734463 00:06:15.953 18:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:15.953 18:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.953 18:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2734463 00:06:15.953 18:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.953 18:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.953 18:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2734463' 00:06:15.953 killing process with pid 2734463 00:06:15.953 18:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2734463 00:06:15.953 18:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2734463 00:06:16.519 00:06:16.519 real 0m1.747s 00:06:16.519 user 0m1.881s 00:06:16.519 sys 0m0.633s 00:06:16.519 18:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
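That was the negative case of locking_app_on_locked_coremask: with pid 2734463 holding the core-0 lock, a second plain spdk_tgt on the same mask must die during startup. The trace shows the exact path: claim_cpu_cores reports "Cannot create lock on core 0, probably process 2734463 has claimed it.", spdk_app_start exits with "Unable to acquire lock on assigned core mask - exiting.", and the NOT-wrapped waitforlisten turns the missing process into a pass. As a sketch (expected to fail; not is the stand-in shown earlier):

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &   # same mask, locking left enabled
    not waitforlisten $! /var/tmp/spdk2.sock             # startup must fail while the lock is held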
00:06:16.519 18:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.519 ************************************ 00:06:16.519 END TEST locking_app_on_locked_coremask 00:06:16.519 ************************************ 00:06:16.519 18:56:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:16.519 18:56:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.519 18:56:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.519 18:56:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.519 ************************************ 00:06:16.519 START TEST locking_overlapped_coremask 00:06:16.519 ************************************ 00:06:16.519 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:16.519 18:56:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2734680 00:06:16.519 18:56:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2734680 /var/tmp/spdk.sock 00:06:16.519 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2734680 ']' 00:06:16.519 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.519 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.519 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.519 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.519 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.519 18:56:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:16.519 [2024-11-26 18:56:33.539093] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
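The locking_overlapped_coremask test starting above launches a first target with core mask 0x7 (cores 0-2) and, further down, a second target with mask 0x1c (cores 2-4); because the two masks share core 2, the second target is expected to fail its lock claim. A minimal sketch of the overlap arithmetic (not part of the test scripts, shown only to make the expected failure concrete):

    mask_a=0x7     # first spdk_tgt: cores 0-2
    mask_b=0x1c    # second spdk_tgt: cores 2-4
    overlap=$(( mask_a & mask_b ))
    if (( overlap )); then
        # 0x7 & 0x1c = 0x4, i.e. bit 2 is set: both masks include core 2
        printf 'masks overlap: 0x%x (core 2)\n' "$overlap"
    fi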
00:06:16.519 [2024-11-26 18:56:33.539152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2734680 ] 00:06:16.519 [2024-11-26 18:56:33.608595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.519 [2024-11-26 18:56:33.659530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.519 [2024-11-26 18:56:33.659616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.519 [2024-11-26 18:56:33.659618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2734844 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2734844 /var/tmp/spdk2.sock 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2734844 /var/tmp/spdk2.sock 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2734844 /var/tmp/spdk2.sock 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2734844 ']' 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.777 18:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.777 [2024-11-26 18:56:33.912377] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
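The NOT waitforlisten invocation above relies on a wrapper that inverts the exit status of the command it runs, so the test passes exactly when the second target fails to come up. A rough sketch of such a wrapper (an assumption about its behavior inferred from how it is used here, not the actual autotest_common.sh source):

    # Succeeds only when the wrapped command fails; used to assert that
    # the overlapping-mask target never starts listening on its socket.
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }
    # usage: NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock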
00:06:16.777 [2024-11-26 18:56:33.912461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2734844 ] 00:06:17.035 [2024-11-26 18:56:34.010068] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2734680 has claimed it. 00:06:17.035 [2024-11-26 18:56:34.010107] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.600 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2734844) - No such process 00:06:17.600 ERROR: process (pid: 2734844) is no longer running 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2734680 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2734680 ']' 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2734680 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2734680 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.600 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2734680' 00:06:17.601 killing process with pid 2734680 00:06:17.601 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2734680 00:06:17.601 18:56:34 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2734680 00:06:17.859 00:06:17.859 real 0m1.424s 00:06:17.859 user 0m3.944s 00:06:17.859 sys 0m0.413s 00:06:17.859 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.859 18:56:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.859 ************************************ 00:06:17.859 END TEST locking_overlapped_coremask 00:06:17.859 ************************************ 00:06:17.859 18:56:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:17.859 18:56:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.859 18:56:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.859 18:56:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.859 ************************************ 00:06:17.859 START TEST locking_overlapped_coremask_via_rpc 00:06:17.859 ************************************ 00:06:17.859 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:17.859 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2734972 00:06:17.859 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2734972 /var/tmp/spdk.sock 00:06:17.859 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:17.859 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2734972 ']' 00:06:17.859 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.859 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.859 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.859 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.859 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.859 [2024-11-26 18:56:35.039874] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:17.859 [2024-11-26 18:56:35.039930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2734972 ] 00:06:18.117 [2024-11-26 18:56:35.111111] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
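The "CPU core locks deactivated" notice just above is the effect of the --disable-cpumask-locks flag passed to spdk_tgt: the target skips taking the per-core /var/tmp/spdk_cpu_lock_* files at startup, which is what lets two targets with overlapping masks run side by side until locks are re-enabled over RPC. A hedged launch sketch using the same binary path and flags that appear in this log:

    # Start a target on cores 0-2 without claiming per-core lock files;
    # a second target with an overlapping mask can then start as well.
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt \
        -m 0x7 --disable-cpumask-locks &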
00:06:18.117 [2024-11-26 18:56:35.111142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.117 [2024-11-26 18:56:35.162373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.117 [2024-11-26 18:56:35.162459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.117 [2024-11-26 18:56:35.162461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.376 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.376 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:18.376 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2735057 00:06:18.376 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2735057 /var/tmp/spdk2.sock 00:06:18.376 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:18.376 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2735057 ']' 00:06:18.376 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.376 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.376 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.376 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.376 18:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.376 [2024-11-26 18:56:35.401852] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:18.376 [2024-11-26 18:56:35.401924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2735057 ] 00:06:18.376 [2024-11-26 18:56:35.498175] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
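The check_remaining_locks helper seen earlier in this log expects exactly /var/tmp/spdk_cpu_lock_000 through _002 to exist for a 0x7 mask. A small sketch along the same lines (assuming only that lock files follow the spdk_cpu_lock_NNN naming visible in those brace expansions):

    # List which cores currently have an SPDK per-core lock file.
    for f in /var/tmp/spdk_cpu_lock_*; do
        [ -e "$f" ] && echo "core ${f##*_} is locked"
    done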
00:06:18.376 [2024-11-26 18:56:35.498207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.634 [2024-11-26 18:56:35.600634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.634 [2024-11-26 18:56:35.600741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.634 [2024-11-26 18:56:35.600743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.200 [2024-11-26 18:56:36.269541] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2734972 has claimed it. 
00:06:19.200 request: 00:06:19.200 { 00:06:19.200 "method": "framework_enable_cpumask_locks", 00:06:19.200 "req_id": 1 00:06:19.200 } 00:06:19.200 Got JSON-RPC error response 00:06:19.200 response: 00:06:19.200 { 00:06:19.200 "code": -32603, 00:06:19.200 "message": "Failed to claim CPU core: 2" 00:06:19.200 } 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2734972 /var/tmp/spdk.sock 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2734972 ']' 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.200 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.458 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.458 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.458 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2735057 /var/tmp/spdk2.sock 00:06:19.458 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2735057 ']' 00:06:19.458 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.458 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.458 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
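The JSON-RPC exchange above shows why the via_rpc variant exists: with --disable-cpumask-locks both targets start cleanly, and the conflict only surfaces when framework_enable_cpumask_locks is invoked on the second target, which returns -32603 ("Failed to claim CPU core: 2") because the first target enabled its locks and claimed core 2 a moment earlier. A sketch of the same call issued by hand, using the rpc.py path and socket name from this log:

    # Expected to fail with -32603 while the first target holds core 2's lock.
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk2.sock framework_enable_cpumask_locks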
00:06:19.458 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.458 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.716 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.716 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.716 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:19.716 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.716 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.716 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.716 00:06:19.716 real 0m1.690s 00:06:19.716 user 0m0.800s 00:06:19.716 sys 0m0.174s 00:06:19.716 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.716 18:56:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.716 ************************************ 00:06:19.716 END TEST locking_overlapped_coremask_via_rpc 00:06:19.716 ************************************ 00:06:19.716 18:56:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:19.716 18:56:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2734972 ]] 00:06:19.716 18:56:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2734972 00:06:19.716 18:56:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2734972 ']' 00:06:19.716 18:56:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2734972 00:06:19.716 18:56:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:19.716 18:56:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.716 18:56:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2734972 00:06:19.716 18:56:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.716 18:56:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.716 18:56:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2734972' 00:06:19.716 killing process with pid 2734972 00:06:19.716 18:56:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2734972 00:06:19.716 18:56:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2734972 00:06:19.974 18:56:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2735057 ]] 00:06:19.974 18:56:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2735057 00:06:19.974 18:56:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2735057 ']' 00:06:19.974 18:56:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2735057 00:06:19.974 18:56:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:19.974 18:56:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:19.974 18:56:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2735057 00:06:19.974 18:56:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:19.974 18:56:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:19.974 18:56:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2735057' 00:06:19.974 killing process with pid 2735057 00:06:19.974 18:56:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2735057 00:06:19.974 18:56:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2735057 00:06:20.541 18:56:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:20.541 18:56:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:20.541 18:56:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2734972 ]] 00:06:20.541 18:56:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2734972 00:06:20.541 18:56:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2734972 ']' 00:06:20.541 18:56:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2734972 00:06:20.541 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2734972) - No such process 00:06:20.541 18:56:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2734972 is not found' 00:06:20.541 Process with pid 2734972 is not found 00:06:20.541 18:56:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2735057 ]] 00:06:20.541 18:56:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2735057 00:06:20.541 18:56:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2735057 ']' 00:06:20.541 18:56:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2735057 00:06:20.541 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2735057) - No such process 00:06:20.541 18:56:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2735057 is not found' 00:06:20.541 Process with pid 2735057 is not found 00:06:20.541 18:56:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:20.541 00:06:20.541 real 0m16.057s 00:06:20.541 user 0m26.549s 00:06:20.541 sys 0m6.236s 00:06:20.541 18:56:37 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.541 18:56:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.541 ************************************ 00:06:20.541 END TEST cpu_locks 00:06:20.541 ************************************ 00:06:20.541 00:06:20.541 real 0m41.056s 00:06:20.541 user 1m16.285s 00:06:20.541 sys 0m10.468s 00:06:20.541 18:56:37 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.541 18:56:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.541 ************************************ 00:06:20.541 END TEST event 00:06:20.541 ************************************ 00:06:20.541 18:56:37 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:20.541 18:56:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.541 18:56:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.541 18:56:37 -- common/autotest_common.sh@10 -- # set +x 00:06:20.541 ************************************ 00:06:20.541 START TEST thread 00:06:20.541 ************************************ 00:06:20.541 18:56:37 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:20.541 * Looking for test storage... 00:06:20.541 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:20.541 18:56:37 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.541 18:56:37 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.541 18:56:37 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.800 18:56:37 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.800 18:56:37 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.800 18:56:37 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.800 18:56:37 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.800 18:56:37 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.800 18:56:37 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.800 18:56:37 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.800 18:56:37 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.800 18:56:37 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.800 18:56:37 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.800 18:56:37 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.800 18:56:37 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.800 18:56:37 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:20.800 18:56:37 thread -- scripts/common.sh@345 -- # : 1 00:06:20.800 18:56:37 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.800 18:56:37 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.800 18:56:37 thread -- scripts/common.sh@365 -- # decimal 1 00:06:20.800 18:56:37 thread -- scripts/common.sh@353 -- # local d=1 00:06:20.800 18:56:37 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.800 18:56:37 thread -- scripts/common.sh@355 -- # echo 1 00:06:20.800 18:56:37 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.800 18:56:37 thread -- scripts/common.sh@366 -- # decimal 2 00:06:20.800 18:56:37 thread -- scripts/common.sh@353 -- # local d=2 00:06:20.800 18:56:37 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.800 18:56:37 thread -- scripts/common.sh@355 -- # echo 2 00:06:20.800 18:56:37 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.800 18:56:37 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.800 18:56:37 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.800 18:56:37 thread -- scripts/common.sh@368 -- # return 0 00:06:20.800 18:56:37 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.800 18:56:37 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.800 --rc genhtml_branch_coverage=1 00:06:20.800 --rc genhtml_function_coverage=1 00:06:20.800 --rc genhtml_legend=1 00:06:20.800 --rc geninfo_all_blocks=1 00:06:20.800 --rc geninfo_unexecuted_blocks=1 00:06:20.800 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.800 ' 00:06:20.800 18:56:37 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.800 --rc genhtml_branch_coverage=1 00:06:20.800 --rc genhtml_function_coverage=1 00:06:20.800 --rc genhtml_legend=1 
00:06:20.800 --rc geninfo_all_blocks=1 00:06:20.800 --rc geninfo_unexecuted_blocks=1 00:06:20.800 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.800 ' 00:06:20.800 18:56:37 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.800 --rc genhtml_branch_coverage=1 00:06:20.800 --rc genhtml_function_coverage=1 00:06:20.800 --rc genhtml_legend=1 00:06:20.800 --rc geninfo_all_blocks=1 00:06:20.800 --rc geninfo_unexecuted_blocks=1 00:06:20.800 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.800 ' 00:06:20.800 18:56:37 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.800 --rc genhtml_branch_coverage=1 00:06:20.800 --rc genhtml_function_coverage=1 00:06:20.800 --rc genhtml_legend=1 00:06:20.800 --rc geninfo_all_blocks=1 00:06:20.800 --rc geninfo_unexecuted_blocks=1 00:06:20.800 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.800 ' 00:06:20.800 18:56:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:20.800 18:56:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:20.800 18:56:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.800 18:56:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.800 ************************************ 00:06:20.800 START TEST thread_poller_perf 00:06:20.800 ************************************ 00:06:20.800 18:56:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:20.800 [2024-11-26 18:56:37.859639] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:20.800 [2024-11-26 18:56:37.859739] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2735515 ] 00:06:20.800 [2024-11-26 18:56:37.934300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.800 [2024-11-26 18:56:37.978009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.800 Running 1000 pollers for 1 seconds with 1 microseconds period. 
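The poller_perf invocation above registers 1000 pollers (-b 1000) with a 1-microsecond timer period (-l 1) and runs them for 1 second (-t 1); the second run further down repeats this with -l 0, i.e. busy pollers with no timer. For reference, the two invocations side by side (binary path shortened here):

    # Timed pollers: 1000 pollers, 1 us period, 1 s run (this run)
    poller_perf -b 1000 -l 1 -t 1
    # Busy pollers: same count, zero period (the following run)
    poller_perf -b 1000 -l 0 -t 1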
00:06:22.175 [2024-11-26T17:56:39.389Z] ====================================== 00:06:22.175 [2024-11-26T17:56:39.389Z] busy:2304766226 (cyc) 00:06:22.175 [2024-11-26T17:56:39.389Z] total_run_count: 832000 00:06:22.175 [2024-11-26T17:56:39.389Z] tsc_hz: 2300000000 (cyc) 00:06:22.175 [2024-11-26T17:56:39.389Z] ====================================== 00:06:22.175 [2024-11-26T17:56:39.389Z] poller_cost: 2770 (cyc), 1204 (nsec) 00:06:22.175 00:06:22.175 real 0m1.181s 00:06:22.175 user 0m1.097s 00:06:22.175 sys 0m0.081s 00:06:22.175 18:56:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.175 18:56:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.175 ************************************ 00:06:22.175 END TEST thread_poller_perf 00:06:22.175 ************************************ 00:06:22.175 18:56:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:22.175 18:56:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:22.175 18:56:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.175 18:56:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.175 ************************************ 00:06:22.175 START TEST thread_poller_perf 00:06:22.175 ************************************ 00:06:22.175 18:56:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:22.175 [2024-11-26 18:56:39.125544] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:22.175 [2024-11-26 18:56:39.125625] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2735682 ] 00:06:22.175 [2024-11-26 18:56:39.199823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.175 [2024-11-26 18:56:39.243813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.175 Running 1000 pollers for 1 seconds with 0 microseconds period. 
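The poller_cost figure in the summary above is simply busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. Reproducing the first run's numbers (a worked check, not test code):

    busy=2304766226          # busy cycles from the summary above
    runs=832000              # total_run_count
    tsc_hz=2300000000        # 2.3 GHz
    cyc=$(( busy / runs ))                    # 2770 cycles per poll
    nsec=$(( cyc * 1000000000 / tsc_hz ))     # 1204 nsec per poll
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"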
00:06:23.107 [2024-11-26T17:56:40.321Z] ====================================== 00:06:23.107 [2024-11-26T17:56:40.321Z] busy:2301208550 (cyc) 00:06:23.107 [2024-11-26T17:56:40.321Z] total_run_count: 12831000 00:06:23.107 [2024-11-26T17:56:40.321Z] tsc_hz: 2300000000 (cyc) 00:06:23.107 [2024-11-26T17:56:40.321Z] ====================================== 00:06:23.107 [2024-11-26T17:56:40.321Z] poller_cost: 179 (cyc), 77 (nsec) 00:06:23.107 00:06:23.107 real 0m1.180s 00:06:23.107 user 0m1.093s 00:06:23.107 sys 0m0.083s 00:06:23.107 18:56:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.107 18:56:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.107 ************************************ 00:06:23.107 END TEST thread_poller_perf 00:06:23.107 ************************************ 00:06:23.365 18:56:40 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:23.365 18:56:40 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:23.365 18:56:40 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.365 18:56:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.365 18:56:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.365 ************************************ 00:06:23.365 START TEST thread_spdk_lock 00:06:23.366 ************************************ 00:06:23.366 18:56:40 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:23.366 [2024-11-26 18:56:40.388090] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:23.366 [2024-11-26 18:56:40.388171] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2735836 ] 00:06:23.366 [2024-11-26 18:56:40.464465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.366 [2024-11-26 18:56:40.511495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.366 [2024-11-26 18:56:40.511498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.932 [2024-11-26 18:56:41.004277] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:23.932 [2024-11-26 18:56:41.004310] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:23.932 [2024-11-26 18:56:41.004320] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x14dbbc0 00:06:23.932 [2024-11-26 18:56:41.005043] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:23.932 [2024-11-26 18:56:41.005149] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:23.932 [2024-11-26 
18:56:41.005165] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:23.932 Starting test contend 00:06:23.932 Worker Delay Wait us Hold us Total us 00:06:23.932 0 3 170690 186889 357579 00:06:23.932 1 5 89127 286021 375148 00:06:23.932 PASS test contend 00:06:23.932 Starting test hold_by_poller 00:06:23.932 PASS test hold_by_poller 00:06:23.932 Starting test hold_by_message 00:06:23.932 PASS test hold_by_message 00:06:23.932 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:23.932 100014 assertions passed 00:06:23.932 0 assertions failed 00:06:23.932 00:06:23.932 real 0m0.673s 00:06:23.932 user 0m1.072s 00:06:23.932 sys 0m0.092s 00:06:23.932 18:56:41 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.932 18:56:41 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:23.932 ************************************ 00:06:23.932 END TEST thread_spdk_lock 00:06:23.932 ************************************ 00:06:23.932 00:06:23.932 real 0m3.472s 00:06:23.932 user 0m3.456s 00:06:23.932 sys 0m0.537s 00:06:23.932 18:56:41 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.932 18:56:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.932 ************************************ 00:06:23.932 END TEST thread 00:06:23.932 ************************************ 00:06:23.932 18:56:41 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:23.932 18:56:41 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:23.932 18:56:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.932 18:56:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.932 18:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:24.191 ************************************ 00:06:24.191 START TEST app_cmdline 00:06:24.191 ************************************ 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:24.191 * Looking for test storage... 
00:06:24.191 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.191 18:56:41 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.191 --rc genhtml_branch_coverage=1 00:06:24.191 --rc genhtml_function_coverage=1 00:06:24.191 --rc genhtml_legend=1 00:06:24.191 --rc geninfo_all_blocks=1 00:06:24.191 --rc geninfo_unexecuted_blocks=1 00:06:24.191 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:24.191 ' 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.191 --rc genhtml_branch_coverage=1 00:06:24.191 --rc genhtml_function_coverage=1 00:06:24.191 --rc 
genhtml_legend=1 00:06:24.191 --rc geninfo_all_blocks=1 00:06:24.191 --rc geninfo_unexecuted_blocks=1 00:06:24.191 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:24.191 ' 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.191 --rc genhtml_branch_coverage=1 00:06:24.191 --rc genhtml_function_coverage=1 00:06:24.191 --rc genhtml_legend=1 00:06:24.191 --rc geninfo_all_blocks=1 00:06:24.191 --rc geninfo_unexecuted_blocks=1 00:06:24.191 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:24.191 ' 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.191 --rc genhtml_branch_coverage=1 00:06:24.191 --rc genhtml_function_coverage=1 00:06:24.191 --rc genhtml_legend=1 00:06:24.191 --rc geninfo_all_blocks=1 00:06:24.191 --rc geninfo_unexecuted_blocks=1 00:06:24.191 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:24.191 ' 00:06:24.191 18:56:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:24.191 18:56:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2735986 00:06:24.191 18:56:41 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:24.191 18:56:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2735986 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2735986 ']' 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.191 18:56:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.191 [2024-11-26 18:56:41.365806] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
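The target above is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over the RPC socket; the env_dpdk_get_mem_stats call attempted further down is expected to be rejected with -32601 ("Method not found"). A sketch of that negative check (command shapes taken from this log, rpc.py path shortened):

    # On the allow-list: returns the version JSON shown below.
    scripts/rpc.py spdk_get_version
    # Not on the allow-list: should fail with JSON-RPC error -32601.
    scripts/rpc.py env_dpdk_get_mem_stats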
00:06:24.191 [2024-11-26 18:56:41.365871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2735986 ] 00:06:24.450 [2024-11-26 18:56:41.435070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.450 [2024-11-26 18:56:41.483296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.708 18:56:41 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.708 18:56:41 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:24.708 18:56:41 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:24.708 { 00:06:24.708 "version": "SPDK v25.01-pre git sha1 afdec00e1", 00:06:24.708 "fields": { 00:06:24.708 "major": 25, 00:06:24.708 "minor": 1, 00:06:24.708 "patch": 0, 00:06:24.708 "suffix": "-pre", 00:06:24.708 "commit": "afdec00e1" 00:06:24.708 } 00:06:24.708 } 00:06:24.708 18:56:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:24.708 18:56:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:24.708 18:56:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:24.708 18:56:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:24.708 18:56:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:24.708 18:56:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:24.708 18:56:41 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.708 18:56:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.708 18:56:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:24.708 18:56:41 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.966 18:56:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:24.966 18:56:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:24.966 18:56:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.966 18:56:41 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:24.966 18:56:41 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.966 18:56:41 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:24.966 18:56:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.966 18:56:41 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:24.966 18:56:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.966 18:56:41 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:24.966 18:56:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.966 18:56:41 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:24.966 18:56:41 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:24.966 18:56:41 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.966 request: 00:06:24.966 { 00:06:24.966 "method": "env_dpdk_get_mem_stats", 00:06:24.966 "req_id": 1 00:06:24.966 } 00:06:24.966 Got JSON-RPC error response 00:06:24.966 response: 00:06:24.966 { 00:06:24.966 "code": -32601, 00:06:24.966 "message": "Method not found" 00:06:24.966 } 00:06:24.966 18:56:42 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:24.966 18:56:42 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:24.966 18:56:42 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:24.966 18:56:42 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:24.966 18:56:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2735986 00:06:24.966 18:56:42 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2735986 ']' 00:06:24.966 18:56:42 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2735986 00:06:24.966 18:56:42 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:24.966 18:56:42 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.966 18:56:42 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2735986 00:06:25.226 18:56:42 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.226 18:56:42 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.226 18:56:42 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2735986' 00:06:25.226 killing process with pid 2735986 00:06:25.226 18:56:42 app_cmdline -- common/autotest_common.sh@973 -- # kill 2735986 00:06:25.226 18:56:42 app_cmdline -- common/autotest_common.sh@978 -- # wait 2735986 00:06:25.485 00:06:25.485 real 0m1.341s 00:06:25.485 user 0m1.527s 00:06:25.485 sys 0m0.484s 00:06:25.485 18:56:42 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.485 18:56:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:25.485 ************************************ 00:06:25.485 END TEST app_cmdline 00:06:25.485 ************************************ 00:06:25.485 18:56:42 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:25.485 18:56:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.485 18:56:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.485 18:56:42 -- common/autotest_common.sh@10 -- # set +x 00:06:25.485 ************************************ 00:06:25.485 START TEST version 00:06:25.485 ************************************ 00:06:25.485 18:56:42 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:25.485 * Looking for test storage... 
00:06:25.485 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:25.485 18:56:42 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:25.485 18:56:42 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:25.485 18:56:42 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:25.743 18:56:42 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:25.743 18:56:42 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.743 18:56:42 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.743 18:56:42 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.743 18:56:42 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.743 18:56:42 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.743 18:56:42 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.743 18:56:42 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.743 18:56:42 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.743 18:56:42 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.743 18:56:42 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.743 18:56:42 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.744 18:56:42 version -- scripts/common.sh@344 -- # case "$op" in 00:06:25.744 18:56:42 version -- scripts/common.sh@345 -- # : 1 00:06:25.744 18:56:42 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.744 18:56:42 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.744 18:56:42 version -- scripts/common.sh@365 -- # decimal 1 00:06:25.744 18:56:42 version -- scripts/common.sh@353 -- # local d=1 00:06:25.744 18:56:42 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.744 18:56:42 version -- scripts/common.sh@355 -- # echo 1 00:06:25.744 18:56:42 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.744 18:56:42 version -- scripts/common.sh@366 -- # decimal 2 00:06:25.744 18:56:42 version -- scripts/common.sh@353 -- # local d=2 00:06:25.744 18:56:42 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.744 18:56:42 version -- scripts/common.sh@355 -- # echo 2 00:06:25.744 18:56:42 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.744 18:56:42 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.744 18:56:42 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.744 18:56:42 version -- scripts/common.sh@368 -- # return 0 00:06:25.744 18:56:42 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.744 18:56:42 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:25.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.744 --rc genhtml_branch_coverage=1 00:06:25.744 --rc genhtml_function_coverage=1 00:06:25.744 --rc genhtml_legend=1 00:06:25.744 --rc geninfo_all_blocks=1 00:06:25.744 --rc geninfo_unexecuted_blocks=1 00:06:25.744 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.744 ' 00:06:25.744 18:56:42 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:25.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.744 --rc genhtml_branch_coverage=1 00:06:25.744 --rc genhtml_function_coverage=1 00:06:25.744 --rc genhtml_legend=1 00:06:25.744 --rc geninfo_all_blocks=1 00:06:25.744 --rc geninfo_unexecuted_blocks=1 00:06:25.744 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.744 ' 00:06:25.744 18:56:42 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:25.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.744 --rc genhtml_branch_coverage=1 00:06:25.744 --rc genhtml_function_coverage=1 00:06:25.744 --rc genhtml_legend=1 00:06:25.744 --rc geninfo_all_blocks=1 00:06:25.744 --rc geninfo_unexecuted_blocks=1 00:06:25.744 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.744 ' 00:06:25.744 18:56:42 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:25.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.744 --rc genhtml_branch_coverage=1 00:06:25.744 --rc genhtml_function_coverage=1 00:06:25.744 --rc genhtml_legend=1 00:06:25.744 --rc geninfo_all_blocks=1 00:06:25.744 --rc geninfo_unexecuted_blocks=1 00:06:25.744 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.744 ' 00:06:25.744 18:56:42 version -- app/version.sh@17 -- # get_header_version major 00:06:25.744 18:56:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:25.744 18:56:42 version -- app/version.sh@14 -- # cut -f2 00:06:25.744 18:56:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.744 18:56:42 version -- app/version.sh@17 -- # major=25 00:06:25.744 18:56:42 version -- app/version.sh@18 -- # get_header_version minor 00:06:25.744 18:56:42 version -- app/version.sh@14 -- # cut -f2 00:06:25.744 18:56:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.744 18:56:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:25.744 18:56:42 version -- app/version.sh@18 -- # minor=1 00:06:25.744 18:56:42 version -- app/version.sh@19 -- # get_header_version patch 00:06:25.744 18:56:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:25.744 18:56:42 version -- app/version.sh@14 -- # cut -f2 00:06:25.744 18:56:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.744 18:56:42 version -- app/version.sh@19 -- # patch=0 00:06:25.744 18:56:42 version -- app/version.sh@20 -- # get_header_version suffix 00:06:25.744 18:56:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:25.744 18:56:42 version -- app/version.sh@14 -- # cut -f2 00:06:25.744 18:56:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.744 18:56:42 version -- app/version.sh@20 -- # suffix=-pre 00:06:25.744 18:56:42 version -- app/version.sh@22 -- # version=25.1 00:06:25.744 18:56:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:25.744 18:56:42 version -- app/version.sh@28 -- # version=25.1rc0 00:06:25.744 18:56:42 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:25.744 18:56:42 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:06:25.744 18:56:42 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:25.744 18:56:42 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:25.744 00:06:25.744 real 0m0.260s 00:06:25.744 user 0m0.152s 00:06:25.744 sys 0m0.156s 00:06:25.744 18:56:42 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.744 18:56:42 version -- common/autotest_common.sh@10 -- # set +x 00:06:25.744 ************************************ 00:06:25.744 END TEST version 00:06:25.744 ************************************ 00:06:25.744 18:56:42 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:25.744 18:56:42 -- spdk/autotest.sh@194 -- # uname -s 00:06:25.744 18:56:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:25.744 18:56:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:25.744 18:56:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:25.744 18:56:42 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:25.744 18:56:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:25.744 18:56:42 -- common/autotest_common.sh@10 -- # set +x 00:06:25.744 18:56:42 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:06:25.744 18:56:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:06:25.744 18:56:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:06:25.744 18:56:42 -- spdk/autotest.sh@374 -- # [[ 1 -eq 1 ]] 00:06:25.744 18:56:42 -- spdk/autotest.sh@375 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:25.744 18:56:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.744 18:56:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.744 18:56:42 -- common/autotest_common.sh@10 -- # set +x 00:06:26.003 ************************************ 00:06:26.003 START TEST llvm_fuzz 00:06:26.003 ************************************ 00:06:26.003 18:56:42 llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:26.003 * Looking for test storage... 
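TEST version above boils down to a grep/cut/tr pipeline over include/spdk/version.h followed by a consistency check against the Python bindings: major 25, minor 1, patch 0, and suffix -pre combine into 25.1rc0, which must match spdk.__version__. A condensed sketch of that extraction, assuming $rootdir is an SPDK checkout (the full script also appends a non-zero patch level before the rc suffix):

    # Fetch one field from version.h, e.g. '#define SPDK_VERSION_MAJOR 25'.
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            "$rootdir/include/spdk/version.h" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 25
    minor=$(get_header_version MINOR)    # 1
    suffix=$(get_header_version SUFFIX)  # -pre marks a release candidate
    version="$major.$minor"
    [[ $suffix == -pre ]] && version+=rc0
    # The C header and the in-tree Python package must agree.
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]]

The bare cut -f2 relies on version.h separating macro name and value with a tab, which is why no explicit delimiter appears in the traced commands above.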
00:06:26.003 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:26.003 18:56:43 llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.003 18:56:43 llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.003 18:56:43 llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.003 18:56:43 llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.003 18:56:43 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:26.003 18:56:43 llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.003 18:56:43 llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.003 --rc genhtml_branch_coverage=1 00:06:26.003 --rc genhtml_function_coverage=1 00:06:26.003 --rc genhtml_legend=1 00:06:26.003 --rc geninfo_all_blocks=1 00:06:26.003 --rc geninfo_unexecuted_blocks=1 00:06:26.003 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.003 ' 00:06:26.003 18:56:43 llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.003 --rc genhtml_branch_coverage=1 00:06:26.003 --rc genhtml_function_coverage=1 00:06:26.003 --rc genhtml_legend=1 00:06:26.003 --rc geninfo_all_blocks=1 00:06:26.003 --rc 
geninfo_unexecuted_blocks=1 00:06:26.003 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.003 ' 00:06:26.003 18:56:43 llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.003 --rc genhtml_branch_coverage=1 00:06:26.003 --rc genhtml_function_coverage=1 00:06:26.003 --rc genhtml_legend=1 00:06:26.003 --rc geninfo_all_blocks=1 00:06:26.003 --rc geninfo_unexecuted_blocks=1 00:06:26.003 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.003 ' 00:06:26.003 18:56:43 llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.003 --rc genhtml_branch_coverage=1 00:06:26.003 --rc genhtml_function_coverage=1 00:06:26.003 --rc genhtml_legend=1 00:06:26.003 --rc geninfo_all_blocks=1 00:06:26.003 --rc geninfo_unexecuted_blocks=1 00:06:26.003 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.003 ' 00:06:26.003 18:56:43 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:26.003 18:56:43 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:26.003 18:56:43 llvm_fuzz -- common/autotest_common.sh@550 -- # fuzzers=() 00:06:26.003 18:56:43 llvm_fuzz -- common/autotest_common.sh@550 -- # local fuzzers 00:06:26.003 18:56:43 llvm_fuzz -- common/autotest_common.sh@552 -- # [[ -n '' ]] 00:06:26.003 18:56:43 llvm_fuzz -- common/autotest_common.sh@555 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:26.003 18:56:43 llvm_fuzz -- common/autotest_common.sh@556 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:26.003 18:56:43 llvm_fuzz -- common/autotest_common.sh@559 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:26.003 18:56:43 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:26.003 18:56:43 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:26.003 18:56:43 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:26.003 18:56:43 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:26.004 18:56:43 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:26.004 18:56:43 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:26.004 18:56:43 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:26.004 18:56:43 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:26.004 18:56:43 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:26.004 18:56:43 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.004 18:56:43 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.004 18:56:43 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:26.004 ************************************ 00:06:26.004 START TEST nvmf_llvm_fuzz 00:06:26.004 ************************************ 00:06:26.004 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:26.267 * Looking for test storage... 
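The llvm.sh target selection above is a glob-and-dispatch loop: it lists test/fuzz/llvm/*, keeps only the basenames (here common.sh llvm-gcov.sh nvmf vfio), and routes each real suite to its own run.sh while helper files fall through the case unmatched. A minimal sketch of that pattern, assuming $rootdir points at the SPDK tree and a run_test helper that behaves as in the log:

    fuzzers=("$rootdir/test/fuzz/llvm/"*)   # directory entries, full paths
    fuzzers=("${fuzzers[@]##*/}")           # strip to basenames
    for fuzzer in "${fuzzers[@]}"; do
        case "$fuzzer" in
            nvmf | vfio)                    # real fuzz suites carry a run.sh
                run_test "${fuzzer}_llvm_fuzz" \
                    "$rootdir/test/fuzz/llvm/$fuzzer/run.sh"
                ;;
            *) ;;                           # common.sh, llvm-gcov.sh are helpers; skipped
        esac
    done

Because vfio sits later in the same list, the nvmf suite that starts below is simply the first entry to match.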
00:06:26.267 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.267 --rc genhtml_branch_coverage=1 00:06:26.267 --rc genhtml_function_coverage=1 00:06:26.267 --rc genhtml_legend=1 00:06:26.267 --rc geninfo_all_blocks=1 00:06:26.267 --rc geninfo_unexecuted_blocks=1 00:06:26.267 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.267 ' 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.267 --rc genhtml_branch_coverage=1 00:06:26.267 --rc genhtml_function_coverage=1 00:06:26.267 --rc genhtml_legend=1 00:06:26.267 --rc geninfo_all_blocks=1 00:06:26.267 --rc geninfo_unexecuted_blocks=1 00:06:26.267 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.267 ' 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.267 --rc genhtml_branch_coverage=1 00:06:26.267 --rc genhtml_function_coverage=1 00:06:26.267 --rc genhtml_legend=1 00:06:26.267 --rc geninfo_all_blocks=1 00:06:26.267 --rc geninfo_unexecuted_blocks=1 00:06:26.267 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.267 ' 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.267 --rc genhtml_branch_coverage=1 00:06:26.267 --rc genhtml_function_coverage=1 00:06:26.267 --rc genhtml_legend=1 00:06:26.267 --rc geninfo_all_blocks=1 00:06:26.267 --rc geninfo_unexecuted_blocks=1 00:06:26.267 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.267 ' 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:26.267 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:26.268 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:26.268 #define SPDK_CONFIG_H 00:06:26.268 #define SPDK_CONFIG_AIO_FSDEV 1 00:06:26.268 #define SPDK_CONFIG_APPS 1 00:06:26.268 #define SPDK_CONFIG_ARCH native 00:06:26.268 #undef SPDK_CONFIG_ASAN 00:06:26.268 #undef SPDK_CONFIG_AVAHI 00:06:26.268 #undef SPDK_CONFIG_CET 00:06:26.268 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:06:26.268 #define SPDK_CONFIG_COVERAGE 1 00:06:26.268 #define SPDK_CONFIG_CROSS_PREFIX 00:06:26.268 #undef SPDK_CONFIG_CRYPTO 00:06:26.268 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:26.268 #undef SPDK_CONFIG_CUSTOMOCF 00:06:26.268 #undef SPDK_CONFIG_DAOS 00:06:26.268 #define SPDK_CONFIG_DAOS_DIR 00:06:26.268 #define SPDK_CONFIG_DEBUG 1 00:06:26.268 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:26.268 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:26.268 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:26.268 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:26.268 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:26.268 #undef SPDK_CONFIG_DPDK_UADK 00:06:26.268 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:26.268 #define SPDK_CONFIG_EXAMPLES 1 00:06:26.268 #undef SPDK_CONFIG_FC 00:06:26.268 #define SPDK_CONFIG_FC_PATH 00:06:26.268 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:26.268 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:26.268 #define SPDK_CONFIG_FSDEV 1 00:06:26.269 #undef SPDK_CONFIG_FUSE 00:06:26.269 #define SPDK_CONFIG_FUZZER 1 00:06:26.269 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:26.269 #undef 
SPDK_CONFIG_GOLANG 00:06:26.269 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:26.269 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:26.269 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:26.269 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:26.269 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:26.269 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:26.269 #undef SPDK_CONFIG_HAVE_LZ4 00:06:26.269 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:06:26.269 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:06:26.269 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:26.269 #define SPDK_CONFIG_IDXD 1 00:06:26.269 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:26.269 #undef SPDK_CONFIG_IPSEC_MB 00:06:26.269 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:26.269 #define SPDK_CONFIG_ISAL 1 00:06:26.269 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:26.269 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:26.269 #define SPDK_CONFIG_LIBDIR 00:06:26.269 #undef SPDK_CONFIG_LTO 00:06:26.269 #define SPDK_CONFIG_MAX_LCORES 128 00:06:26.269 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:06:26.269 #define SPDK_CONFIG_NVME_CUSE 1 00:06:26.269 #undef SPDK_CONFIG_OCF 00:06:26.269 #define SPDK_CONFIG_OCF_PATH 00:06:26.269 #define SPDK_CONFIG_OPENSSL_PATH 00:06:26.269 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:26.269 #define SPDK_CONFIG_PGO_DIR 00:06:26.269 #undef SPDK_CONFIG_PGO_USE 00:06:26.269 #define SPDK_CONFIG_PREFIX /usr/local 00:06:26.269 #undef SPDK_CONFIG_RAID5F 00:06:26.269 #undef SPDK_CONFIG_RBD 00:06:26.269 #define SPDK_CONFIG_RDMA 1 00:06:26.269 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:26.269 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:26.269 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:26.269 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:26.269 #undef SPDK_CONFIG_SHARED 00:06:26.269 #undef SPDK_CONFIG_SMA 00:06:26.269 #define SPDK_CONFIG_TESTS 1 00:06:26.269 #undef SPDK_CONFIG_TSAN 00:06:26.269 #define SPDK_CONFIG_UBLK 1 00:06:26.269 #define SPDK_CONFIG_UBSAN 1 00:06:26.269 #undef SPDK_CONFIG_UNIT_TESTS 00:06:26.269 #undef SPDK_CONFIG_URING 00:06:26.269 #define SPDK_CONFIG_URING_PATH 00:06:26.269 #undef SPDK_CONFIG_URING_ZNS 00:06:26.269 #undef SPDK_CONFIG_USDT 00:06:26.269 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:26.269 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:26.269 #define SPDK_CONFIG_VFIO_USER 1 00:06:26.269 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:26.269 #define SPDK_CONFIG_VHOST 1 00:06:26.269 #define SPDK_CONFIG_VIRTIO 1 00:06:26.269 #undef SPDK_CONFIG_VTUNE 00:06:26.269 #define SPDK_CONFIG_VTUNE_DIR 00:06:26.269 #define SPDK_CONFIG_WERROR 1 00:06:26.269 #define SPDK_CONFIG_WPDK_DIR 00:06:26.269 #undef SPDK_CONFIG_XNVME 00:06:26.269 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:26.269 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:06:26.270 18:56:43 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:26.270 18:56:43 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:06:26.270 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:06:26.271 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j72 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 2736495 ]] 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 2736495 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.87FgQV 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.87FgQV/tests/nvmf /tmp/spdk.87FgQV 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=86644334592 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=94500356096 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=7856021504 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=47245414400 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=47250178048 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=18894340096 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=18900074496 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5734400 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=47249727488 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=47250178048 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=450560 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=9450020864 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=9450033152 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:06:26.532 * Looking for test storage... 
00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=86644334592 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=10070614016 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:26.532 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set -o errtrace 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1685 -- # true 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # xtrace_fd 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.532 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.533 --rc genhtml_branch_coverage=1 00:06:26.533 --rc genhtml_function_coverage=1 00:06:26.533 --rc genhtml_legend=1 00:06:26.533 --rc geninfo_all_blocks=1 00:06:26.533 --rc geninfo_unexecuted_blocks=1 00:06:26.533 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.533 ' 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.533 --rc genhtml_branch_coverage=1 00:06:26.533 --rc genhtml_function_coverage=1 00:06:26.533 --rc genhtml_legend=1 00:06:26.533 --rc geninfo_all_blocks=1 00:06:26.533 --rc geninfo_unexecuted_blocks=1 00:06:26.533 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.533 ' 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.533 --rc genhtml_branch_coverage=1 00:06:26.533 --rc genhtml_function_coverage=1 00:06:26.533 --rc genhtml_legend=1 00:06:26.533 --rc geninfo_all_blocks=1 00:06:26.533 --rc geninfo_unexecuted_blocks=1 00:06:26.533 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.533 ' 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.533 --rc genhtml_branch_coverage=1 00:06:26.533 --rc genhtml_function_coverage=1 00:06:26.533 --rc genhtml_legend=1 00:06:26.533 --rc geninfo_all_blocks=1 00:06:26.533 --rc geninfo_unexecuted_blocks=1 00:06:26.533 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.533 ' 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:26.533 18:56:43 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:26.533 18:56:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:26.533 [2024-11-26 18:56:43.638892] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:26.533 [2024-11-26 18:56:43.638959] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2736560 ] 00:06:26.792 [2024-11-26 18:56:43.828525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.792 [2024-11-26 18:56:43.866578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.792 [2024-11-26 18:56:43.925523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.792 [2024-11-26 18:56:43.941671] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:26.792 INFO: Running with entropic power schedule (0xFF, 100). 00:06:26.792 INFO: Seed: 425145152 00:06:26.792 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:06:26.792 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:06:26.792 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:26.792 INFO: A corpus is not provided, starting from an empty corpus 00:06:26.792 #2 INITED exec/s: 0 rss: 67Mb 00:06:26.792 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:26.792 This may also happen if the target rejected all inputs we tried so far 00:06:26.792 [2024-11-26 18:56:43.986535] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.792 [2024-11-26 18:56:43.986569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.310 NEW_FUNC[1/716]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:27.310 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:27.310 #13 NEW cov: 12235 ft: 12232 corp: 2/117b lim: 320 exec/s: 0 rss: 74Mb L: 116/116 MS: 1 InsertRepeatedBytes- 00:06:27.310 [2024-11-26 18:56:44.357656] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.310 [2024-11-26 18:56:44.357699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.310 [2024-11-26 18:56:44.357732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.310 [2024-11-26 18:56:44.357748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.310 [2024-11-26 18:56:44.357777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.310 [2024-11-26 18:56:44.357792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.310 #14 NEW cov: 12372 ft: 13056 corp: 3/336b lim: 320 exec/s: 0 rss: 74Mb L: 219/219 
MS: 1 CopyPart- 00:06:27.310 [2024-11-26 18:56:44.447631] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.310 [2024-11-26 18:56:44.447664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.310 #15 NEW cov: 12378 ft: 13338 corp: 4/453b lim: 320 exec/s: 0 rss: 75Mb L: 117/219 MS: 1 InsertByte- 00:06:27.310 [2024-11-26 18:56:44.507786] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.310 [2024-11-26 18:56:44.507818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.569 #16 NEW cov: 12463 ft: 13630 corp: 5/527b lim: 320 exec/s: 0 rss: 75Mb L: 74/219 MS: 1 EraseBytes- 00:06:27.569 [2024-11-26 18:56:44.567910] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.569 [2024-11-26 18:56:44.567940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.569 #17 NEW cov: 12463 ft: 13698 corp: 6/627b lim: 320 exec/s: 0 rss: 75Mb L: 100/219 MS: 1 InsertRepeatedBytes- 00:06:27.569 [2024-11-26 18:56:44.658223] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.569 [2024-11-26 18:56:44.658252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.569 [2024-11-26 18:56:44.658282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.569 [2024-11-26 18:56:44.658297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.569 [2024-11-26 18:56:44.658340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.569 [2024-11-26 18:56:44.658361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.569 #18 NEW cov: 12463 ft: 13726 corp: 7/846b lim: 320 exec/s: 0 rss: 75Mb L: 219/219 MS: 1 ChangeBinInt- 00:06:27.569 [2024-11-26 18:56:44.748355] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.569 [2024-11-26 18:56:44.748384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.830 #19 NEW cov: 12463 ft: 13871 corp: 8/920b lim: 320 exec/s: 0 rss: 75Mb L: 74/219 MS: 1 ChangeBinInt- 00:06:27.830 [2024-11-26 18:56:44.808529] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.831 [2024-11-26 18:56:44.808560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.831 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:27.831 #25 NEW cov: 12480 ft: 13984 corp: 9/1020b lim: 320 exec/s: 0 rss: 75Mb 
L: 100/219 MS: 1 ChangeBinInt- 00:06:27.831 [2024-11-26 18:56:44.898796] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.831 [2024-11-26 18:56:44.898827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.831 #26 NEW cov: 12480 ft: 14033 corp: 10/1094b lim: 320 exec/s: 0 rss: 75Mb L: 74/219 MS: 1 CopyPart- 00:06:27.831 [2024-11-26 18:56:44.989012] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.831 [2024-11-26 18:56:44.989041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.831 #27 NEW cov: 12480 ft: 14071 corp: 11/1168b lim: 320 exec/s: 27 rss: 75Mb L: 74/219 MS: 1 ChangeBit- 00:06:27.831 [2024-11-26 18:56:45.039191] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.831 [2024-11-26 18:56:45.039223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.091 #28 NEW cov: 12480 ft: 14106 corp: 12/1242b lim: 320 exec/s: 28 rss: 75Mb L: 74/219 MS: 1 ChangeBinInt- 00:06:28.091 [2024-11-26 18:56:45.099291] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00470000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.091 [2024-11-26 18:56:45.099321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.091 #29 NEW cov: 12480 ft: 14126 corp: 13/1316b lim: 320 exec/s: 29 rss: 75Mb L: 74/219 MS: 1 ChangeByte- 00:06:28.091 [2024-11-26 18:56:45.149410] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x230000 00:06:28.091 [2024-11-26 18:56:45.149439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.091 #30 NEW cov: 12480 ft: 14134 corp: 14/1391b lim: 320 exec/s: 30 rss: 75Mb L: 75/219 MS: 1 InsertByte- 00:06:28.091 [2024-11-26 18:56:45.239761] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.091 [2024-11-26 18:56:45.239790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.091 #31 NEW cov: 12480 ft: 14150 corp: 15/1479b lim: 320 exec/s: 31 rss: 75Mb L: 88/219 MS: 1 CopyPart- 00:06:28.091 [2024-11-26 18:56:45.289781] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.091 [2024-11-26 18:56:45.289810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.350 #32 NEW cov: 12480 ft: 14162 corp: 16/1550b lim: 320 exec/s: 32 rss: 75Mb L: 71/219 MS: 1 EraseBytes- 00:06:28.350 [2024-11-26 18:56:45.380028] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.350 [2024-11-26 18:56:45.380059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.350 #33 NEW cov: 12480 ft: 14171 corp: 17/1650b lim: 320 exec/s: 33 rss: 75Mb L: 100/219 MS: 1 CrossOver- 00:06:28.350 [2024-11-26 18:56:45.470265] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.350 [2024-11-26 18:56:45.470295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.350 #34 NEW cov: 12480 ft: 14241 corp: 18/1722b lim: 320 exec/s: 34 rss: 75Mb L: 72/219 MS: 1 EraseBytes- 00:06:28.350 [2024-11-26 18:56:45.520379] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.350 [2024-11-26 18:56:45.520409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.350 #35 NEW cov: 12480 ft: 14300 corp: 19/1796b lim: 320 exec/s: 35 rss: 75Mb L: 74/219 MS: 1 ChangeBinInt- 00:06:28.609 [2024-11-26 18:56:45.570521] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.609 [2024-11-26 18:56:45.570553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.609 #36 NEW cov: 12480 ft: 14317 corp: 20/1885b lim: 320 exec/s: 36 rss: 75Mb L: 89/219 MS: 1 CrossOver- 00:06:28.609 [2024-11-26 18:56:45.660797] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:d3d3d3d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.609 [2024-11-26 18:56:45.660828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.609 [2024-11-26 18:56:45.660862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d3) qid:0 cid:5 nsid:d3d3d3d3 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd3d3d3d3d3d3d3d3 00:06:28.609 [2024-11-26 18:56:45.660878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.609 NEW_FUNC[1/1]: 0x153f738 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2213 00:06:28.609 #37 NEW cov: 12511 ft: 14509 corp: 21/2024b lim: 320 exec/s: 37 rss: 75Mb L: 139/219 MS: 1 InsertRepeatedBytes- 00:06:28.609 [2024-11-26 18:56:45.761113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.609 [2024-11-26 18:56:45.761144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.609 [2024-11-26 18:56:45.761174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.609 [2024-11-26 18:56:45.761190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.609 [2024-11-26 18:56:45.761216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.609 [2024-11-26 18:56:45.761231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.609 #38 NEW cov: 12511 ft: 14569 corp: 22/2251b lim: 320 exec/s: 38 rss: 75Mb L: 227/227 MS: 1 InsertRepeatedBytes- 00:06:28.868 [2024-11-26 18:56:45.821267] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.868 [2024-11-26 18:56:45.821300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.868 #39 NEW cov: 12518 ft: 14612 corp: 23/2338b lim: 320 exec/s: 39 rss: 76Mb L: 87/227 MS: 1 EraseBytes- 00:06:28.868 [2024-11-26 18:56:45.911477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.868 [2024-11-26 18:56:45.911508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.868 #40 NEW cov: 12518 ft: 14651 corp: 24/2438b lim: 320 exec/s: 40 rss: 76Mb L: 100/227 MS: 1 ChangeByte- 00:06:28.868 [2024-11-26 18:56:45.961589] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.868 [2024-11-26 18:56:45.961619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.868 #41 NEW cov: 12518 ft: 14687 corp: 25/2509b lim: 320 exec/s: 20 rss: 76Mb L: 71/227 MS: 1 EraseBytes- 00:06:28.868 #41 DONE cov: 12518 ft: 14687 corp: 25/2509b lim: 320 exec/s: 20 rss: 76Mb 00:06:28.868 Done 41 runs in 2 second(s) 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:29.126 18:56:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:29.126 [2024-11-26 18:56:46.146559] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:29.126 [2024-11-26 18:56:46.146643] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2736917 ] 00:06:29.126 [2024-11-26 18:56:46.332912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.385 [2024-11-26 18:56:46.372203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.385 [2024-11-26 18:56:46.431184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.385 [2024-11-26 18:56:46.447330] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:29.385 INFO: Running with entropic power schedule (0xFF, 100). 00:06:29.385 INFO: Seed: 2930144877 00:06:29.385 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:06:29.385 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:06:29.385 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:29.385 INFO: A corpus is not provided, starting from an empty corpus 00:06:29.385 #2 INITED exec/s: 0 rss: 67Mb 00:06:29.385 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
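As the setup trace for this second fuzzer shows, every start_llvm_fuzz iteration is derived from the fuzzer index alone: a unique listener port (44 plus the zero-padded index), a private JSON config rewritten to that port, LSAN leak suppressions, and a per-fuzzer corpus directory. A rough sketch of the per-run setup, with paths shortened to $rootdir and the redirection targets inferred (xtrace does not show them), and the -P output-dir option omitted for brevity:

    start_llvm_fuzz() {
      local fuzzer_type=$1 timen=$2 core=$3
      local port=44$(printf %02d "$fuzzer_type")   # run 0 -> 4400, run 1 -> 4401, ...
      local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
      local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
      local suppress_file=/var/tmp/suppress_nvmf_fuzz

      mkdir -p "$corpus_dir"
      # Rewrite the shared JSON config so this run listens on its own port.
      sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
          "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
      # LSAN suppressions for leaks expected on the fuzzer's abrupt-exit path.
      echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
      echo leak:nvmf_ctrlr_create >> "$suppress_file"

      "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
        -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
        -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"
    }

The distinct port per run is what lets each fuzzer stand up its own NVMe/TCP listener (4400 above, 4401 here) without tearing down the previous target's state first.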
00:06:29.385 This may also happen if the target rejected all inputs we tried so far 00:06:29.385 [2024-11-26 18:56:46.512741] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.385 [2024-11-26 18:56:46.512865] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.385 [2024-11-26 18:56:46.512975] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.385 [2024-11-26 18:56:46.513081] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.385 [2024-11-26 18:56:46.513297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.385 [2024-11-26 18:56:46.513327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.385 [2024-11-26 18:56:46.513383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.385 [2024-11-26 18:56:46.513398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.385 [2024-11-26 18:56:46.513450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.385 [2024-11-26 18:56:46.513464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.385 [2024-11-26 18:56:46.513521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.385 [2024-11-26 18:56:46.513535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.643 NEW_FUNC[1/717]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:29.643 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:29.643 #10 NEW cov: 12318 ft: 12319 corp: 2/25b lim: 30 exec/s: 0 rss: 74Mb L: 24/24 MS: 3 ChangeBit-InsertByte-InsertRepeatedBytes- 00:06:29.643 [2024-11-26 18:56:46.853860] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.643 [2024-11-26 18:56:46.854001] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.643 [2024-11-26 18:56:46.854126] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.643 [2024-11-26 18:56:46.854397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.643 [2024-11-26 18:56:46.854453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.643 [2024-11-26 18:56:46.854549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.643 [2024-11-26 18:56:46.854578] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.643 [2024-11-26 18:56:46.854659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.643 [2024-11-26 18:56:46.854691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.902 #11 NEW cov: 12431 ft: 13458 corp: 3/47b lim: 30 exec/s: 0 rss: 74Mb L: 22/24 MS: 1 EraseBytes- 00:06:29.902 [2024-11-26 18:56:46.923801] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.902 [2024-11-26 18:56:46.923929] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.902 [2024-11-26 18:56:46.924042] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.902 [2024-11-26 18:56:46.924156] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.902 [2024-11-26 18:56:46.924376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.902 [2024-11-26 18:56:46.924402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.902 [2024-11-26 18:56:46.924456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.902 [2024-11-26 18:56:46.924476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.902 [2024-11-26 18:56:46.924535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.902 [2024-11-26 18:56:46.924550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.902 [2024-11-26 18:56:46.924604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.902 [2024-11-26 18:56:46.924618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.902 #17 NEW cov: 12437 ft: 13816 corp: 4/71b lim: 30 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 ShuffleBytes- 00:06:29.902 [2024-11-26 18:56:46.963859] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.902 [2024-11-26 18:56:46.963981] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.902 [2024-11-26 18:56:46.964095] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.902 [2024-11-26 18:56:46.964304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.902 [2024-11-26 18:56:46.964330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.902 [2024-11-26 18:56:46.964390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.902 [2024-11-26 18:56:46.964404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.902 [2024-11-26 18:56:46.964461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.902 [2024-11-26 18:56:46.964480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.902 #23 NEW cov: 12522 ft: 14048 corp: 5/93b lim: 30 exec/s: 0 rss: 75Mb L: 22/24 MS: 1 ShuffleBytes- 00:06:29.902 [2024-11-26 18:56:47.024051] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.902 [2024-11-26 18:56:47.024175] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.902 [2024-11-26 18:56:47.024290] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff95 00:06:29.902 [2024-11-26 18:56:47.024405] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:29.902 [2024-11-26 18:56:47.024629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.902 [2024-11-26 18:56:47.024655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.902 [2024-11-26 18:56:47.024712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.902 [2024-11-26 18:56:47.024726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.902 [2024-11-26 18:56:47.024782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:959583ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.902 [2024-11-26 18:56:47.024796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.902 [2024-11-26 18:56:47.024851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.902 [2024-11-26 18:56:47.024866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.902 #24 NEW cov: 12522 ft: 14148 corp: 6/120b lim: 30 exec/s: 0 rss: 75Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:06:29.902 [2024-11-26 18:56:47.084106] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534572) > buf size (4096) 00:06:29.902 [2024-11-26 18:56:47.084433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.902 [2024-11-26 18:56:47.084459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.902 [2024-11-26 18:56:47.084520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:29.902 [2024-11-26 18:56:47.084535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.902 #27 NEW cov: 12562 ft: 14629 corp: 7/137b lim: 30 exec/s: 0 rss: 75Mb L: 17/27 MS: 3 CopyPart-CopyPart-InsertRepeatedBytes- 00:06:30.161 [2024-11-26 18:56:47.124274] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.124396] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.124515] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.124743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.124769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.161 [2024-11-26 18:56:47.124830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.124844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.161 [2024-11-26 18:56:47.124901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.124916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.161 #28 NEW cov: 12562 ft: 14727 corp: 8/155b lim: 30 exec/s: 0 rss: 75Mb L: 18/27 MS: 1 EraseBytes- 00:06:30.161 [2024-11-26 18:56:47.164437] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.164568] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.164680] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.164792] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.165013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:240281dd cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.165038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.161 [2024-11-26 18:56:47.165094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.165109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.161 [2024-11-26 18:56:47.165166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.165181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.161 [2024-11-26 18:56:47.165236] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.165251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.161 #29 NEW cov: 12562 ft: 14824 corp: 9/180b lim: 30 exec/s: 0 rss: 75Mb L: 25/27 MS: 1 InsertByte- 00:06:30.161 [2024-11-26 18:56:47.204641] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.204768] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.204886] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff95 00:06:30.161 [2024-11-26 18:56:47.205001] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.205234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.205265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.161 [2024-11-26 18:56:47.205326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.205343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.161 [2024-11-26 18:56:47.205402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:959583ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.205419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.161 [2024-11-26 18:56:47.205478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.205494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.161 #30 NEW cov: 12562 ft: 14931 corp: 10/208b lim: 30 exec/s: 0 rss: 75Mb L: 28/28 MS: 1 InsertByte- 00:06:30.161 [2024-11-26 18:56:47.264702] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.264828] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.264942] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.265055] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.265268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:240281dd cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.265294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.161 [2024-11-26 18:56:47.265350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 
cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.265364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.161 [2024-11-26 18:56:47.265419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.265434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.161 [2024-11-26 18:56:47.265492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.265506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.161 #31 NEW cov: 12562 ft: 14995 corp: 11/233b lim: 30 exec/s: 0 rss: 75Mb L: 25/28 MS: 1 ShuffleBytes- 00:06:30.161 [2024-11-26 18:56:47.324856] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.324975] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.325085] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.161 [2024-11-26 18:56:47.325300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.325326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.161 [2024-11-26 18:56:47.325381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95288195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.161 [2024-11-26 18:56:47.325396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.162 [2024-11-26 18:56:47.325453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.162 [2024-11-26 18:56:47.325468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.162 #32 NEW cov: 12562 ft: 15068 corp: 12/251b lim: 30 exec/s: 0 rss: 75Mb L: 18/28 MS: 1 ChangeByte- 00:06:30.420 [2024-11-26 18:56:47.384959] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534572) > buf size (4096) 00:06:30.420 [2024-11-26 18:56:47.385300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.420 [2024-11-26 18:56:47.385326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.420 [2024-11-26 18:56:47.385383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.420 [2024-11-26 18:56:47.385398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:06:30.420 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:30.420 #33 NEW cov: 12585 ft: 15140 corp: 13/268b lim: 30 exec/s: 0 rss: 75Mb L: 17/28 MS: 1 ShuffleBytes- 00:06:30.420 [2024-11-26 18:56:47.445099] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (26700) > buf size (4096) 00:06:30.420 [2024-11-26 18:56:47.445320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:1a12001a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.420 [2024-11-26 18:56:47.445345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.420 #38 NEW cov: 12585 ft: 15554 corp: 14/274b lim: 30 exec/s: 0 rss: 75Mb L: 6/28 MS: 5 ChangeBit-CopyPart-CopyPart-ChangeBinInt-CrossOver- 00:06:30.420 [2024-11-26 18:56:47.485368] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.420 [2024-11-26 18:56:47.485496] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:06:30.420 [2024-11-26 18:56:47.485614] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.420 [2024-11-26 18:56:47.485724] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.420 [2024-11-26 18:56:47.485834] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.420 [2024-11-26 18:56:47.486056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.420 [2024-11-26 18:56:47.486082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.420 [2024-11-26 18:56:47.486140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.420 [2024-11-26 18:56:47.486155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.420 [2024-11-26 18:56:47.486209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e781e7 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.420 [2024-11-26 18:56:47.486223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.420 [2024-11-26 18:56:47.486279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.420 [2024-11-26 18:56:47.486293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.420 [2024-11-26 18:56:47.486347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.420 [2024-11-26 18:56:47.486360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:30.420 #39 NEW cov: 12585 ft: 15641 corp: 15/304b lim: 30 exec/s: 39 rss: 75Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:06:30.420 [2024-11-26 18:56:47.525412] 
ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.420 [2024-11-26 18:56:47.525536] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.420 [2024-11-26 18:56:47.525651] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.420 [2024-11-26 18:56:47.525882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.421 [2024-11-26 18:56:47.525907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.421 [2024-11-26 18:56:47.525964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.421 [2024-11-26 18:56:47.525979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.421 [2024-11-26 18:56:47.526035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9595812c cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.421 [2024-11-26 18:56:47.526053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.421 #40 NEW cov: 12585 ft: 15661 corp: 16/326b lim: 30 exec/s: 40 rss: 75Mb L: 22/30 MS: 1 ChangeByte- 00:06:30.421 [2024-11-26 18:56:47.565462] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534572) > buf size (4096) 00:06:30.421 [2024-11-26 18:56:47.565606] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:06:30.421 [2024-11-26 18:56:47.565828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.421 [2024-11-26 18:56:47.565853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.421 [2024-11-26 18:56:47.565909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.421 [2024-11-26 18:56:47.565924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.421 #41 NEW cov: 12585 ft: 15707 corp: 17/343b lim: 30 exec/s: 41 rss: 75Mb L: 17/30 MS: 1 CopyPart- 00:06:30.421 [2024-11-26 18:56:47.605692] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009582 00:06:30.421 [2024-11-26 18:56:47.605810] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:06:30.421 [2024-11-26 18:56:47.605924] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.421 [2024-11-26 18:56:47.606039] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.421 [2024-11-26 18:56:47.606149] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.421 [2024-11-26 18:56:47.606366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.421 [2024-11-26 18:56:47.606391] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.421 [2024-11-26 18:56:47.606451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.421 [2024-11-26 18:56:47.606466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.421 [2024-11-26 18:56:47.606524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e781e7 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.421 [2024-11-26 18:56:47.606538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.421 [2024-11-26 18:56:47.606593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.421 [2024-11-26 18:56:47.606607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.421 [2024-11-26 18:56:47.606663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.421 [2024-11-26 18:56:47.606677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:30.680 #42 NEW cov: 12585 ft: 15738 corp: 18/373b lim: 30 exec/s: 42 rss: 75Mb L: 30/30 MS: 1 ChangeByte- 00:06:30.680 [2024-11-26 18:56:47.665891] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.680 [2024-11-26 18:56:47.666013] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (152580) > buf size (4096) 00:06:30.680 [2024-11-26 18:56:47.666138] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.680 [2024-11-26 18:56:47.666254] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.680 [2024-11-26 18:56:47.666368] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.680 [2024-11-26 18:56:47.666589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.680 [2024-11-26 18:56:47.666615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.680 [2024-11-26 18:56:47.666676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.680 [2024-11-26 18:56:47.666691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.680 [2024-11-26 18:56:47.666747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.680 [2024-11-26 18:56:47.666762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.680 [2024-11-26 18:56:47.666818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.680 [2024-11-26 18:56:47.666833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.680 [2024-11-26 18:56:47.666888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.680 [2024-11-26 18:56:47.666903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:30.680 #43 NEW cov: 12585 ft: 15758 corp: 19/403b lim: 30 exec/s: 43 rss: 75Mb L: 30/30 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:30.680 [2024-11-26 18:56:47.705855] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000002dd 00:06:30.680 [2024-11-26 18:56:47.706070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.680 [2024-11-26 18:56:47.706095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.680 #44 NEW cov: 12585 ft: 15794 corp: 20/411b lim: 30 exec/s: 44 rss: 75Mb L: 8/30 MS: 1 CrossOver- 00:06:30.680 [2024-11-26 18:56:47.746171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.680 [2024-11-26 18:56:47.746196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.680 #49 NEW cov: 12585 ft: 15845 corp: 21/420b lim: 30 exec/s: 49 rss: 75Mb L: 9/30 MS: 5 ShuffleBytes-ChangeBit-ChangeBinInt-ShuffleBytes-PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:30.680 [2024-11-26 18:56:47.786186] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.680 [2024-11-26 18:56:47.786306] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.680 [2024-11-26 18:56:47.786417] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.680 [2024-11-26 18:56:47.786536] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.680 [2024-11-26 18:56:47.786755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.680 [2024-11-26 18:56:47.786780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.680 [2024-11-26 18:56:47.786837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.680 [2024-11-26 18:56:47.786855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.680 [2024-11-26 18:56:47.786910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.680 [2024-11-26 18:56:47.786924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:06:30.680 [2024-11-26 18:56:47.786981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:95958115 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.680 [2024-11-26 18:56:47.786995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.680 #50 NEW cov: 12585 ft: 15853 corp: 22/444b lim: 30 exec/s: 50 rss: 75Mb L: 24/30 MS: 1 ChangeBit- 00:06:30.681 [2024-11-26 18:56:47.826256] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.681 [2024-11-26 18:56:47.826379] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.681 [2024-11-26 18:56:47.826521] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.681 [2024-11-26 18:56:47.826747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.681 [2024-11-26 18:56:47.826772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.681 [2024-11-26 18:56:47.826830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.681 [2024-11-26 18:56:47.826844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.681 [2024-11-26 18:56:47.826897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.681 [2024-11-26 18:56:47.826911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.681 #51 NEW cov: 12585 ft: 15893 corp: 23/463b lim: 30 exec/s: 51 rss: 75Mb L: 19/30 MS: 1 EraseBytes- 00:06:30.681 [2024-11-26 18:56:47.866359] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534572) > buf size (4096) 00:06:30.681 [2024-11-26 18:56:47.866486] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (16128) > len (4) 00:06:30.681 [2024-11-26 18:56:47.866809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.681 [2024-11-26 18:56:47.866834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.681 [2024-11-26 18:56:47.866894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.681 [2024-11-26 18:56:47.866909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.681 [2024-11-26 18:56:47.866965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.681 [2024-11-26 18:56:47.866980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.939 #52 NEW cov: 12591 ft: 15988 corp: 24/481b lim: 30 exec/s: 
52 rss: 75Mb L: 18/30 MS: 1 InsertByte- 00:06:30.939 [2024-11-26 18:56:47.926574] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534572) > buf size (4096) 00:06:30.939 [2024-11-26 18:56:47.927012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.939 [2024-11-26 18:56:47.927041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.939 [2024-11-26 18:56:47.927100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.939 [2024-11-26 18:56:47.927115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.939 [2024-11-26 18:56:47.927172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.939 [2024-11-26 18:56:47.927187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.939 #53 NEW cov: 12591 ft: 16029 corp: 25/499b lim: 30 exec/s: 53 rss: 75Mb L: 18/30 MS: 1 CopyPart- 00:06:30.939 [2024-11-26 18:56:47.966785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00b40000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.939 [2024-11-26 18:56:47.966811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.939 #54 NEW cov: 12591 ft: 16044 corp: 26/508b lim: 30 exec/s: 54 rss: 76Mb L: 9/30 MS: 1 ChangeByte- 00:06:30.939 [2024-11-26 18:56:48.026821] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000095 00:06:30.940 [2024-11-26 18:56:48.026940] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.940 [2024-11-26 18:56:48.027053] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.940 [2024-11-26 18:56:48.027273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8395 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.940 [2024-11-26 18:56:48.027299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.940 [2024-11-26 18:56:48.027357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.940 [2024-11-26 18:56:48.027372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.940 [2024-11-26 18:56:48.027428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.940 [2024-11-26 18:56:48.027443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.940 #55 NEW cov: 12591 ft: 16049 corp: 27/527b lim: 30 exec/s: 55 rss: 76Mb L: 19/30 MS: 1 ChangeBinInt- 00:06:30.940 [2024-11-26 18:56:48.087029] 
ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.940 [2024-11-26 18:56:48.087148] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.940 [2024-11-26 18:56:48.087261] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (152692) > buf size (4096) 00:06:30.940 [2024-11-26 18:56:48.087373] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:30.940 [2024-11-26 18:56:48.087595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.940 [2024-11-26 18:56:48.087621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.940 [2024-11-26 18:56:48.087678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.940 [2024-11-26 18:56:48.087692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.940 [2024-11-26 18:56:48.087752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:951c001c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.940 [2024-11-26 18:56:48.087766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.940 [2024-11-26 18:56:48.087823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.940 [2024-11-26 18:56:48.087837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.940 #56 NEW cov: 12591 ft: 16091 corp: 28/551b lim: 30 exec/s: 56 rss: 76Mb L: 24/30 MS: 1 InsertRepeatedBytes- 00:06:30.940 [2024-11-26 18:56:48.127162] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534572) > buf size (4096) 00:06:30.940 [2024-11-26 18:56:48.127707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.940 [2024-11-26 18:56:48.127732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.940 [2024-11-26 18:56:48.127790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.940 [2024-11-26 18:56:48.127805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.940 [2024-11-26 18:56:48.127861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.940 [2024-11-26 18:56:48.127875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.940 [2024-11-26 18:56:48.127933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.940 [2024-11-26 
18:56:48.127948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.199 #57 NEW cov: 12591 ft: 16139 corp: 29/577b lim: 30 exec/s: 57 rss: 76Mb L: 26/30 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:31.199 [2024-11-26 18:56:48.187302] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000095 00:06:31.199 [2024-11-26 18:56:48.187423] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:31.199 [2024-11-26 18:56:48.187547] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:31.199 [2024-11-26 18:56:48.187783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8395 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.199 [2024-11-26 18:56:48.187808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.199 [2024-11-26 18:56:48.187865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.199 [2024-11-26 18:56:48.187880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.199 [2024-11-26 18:56:48.187938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.199 [2024-11-26 18:56:48.187953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.199 #58 NEW cov: 12591 ft: 16172 corp: 30/596b lim: 30 exec/s: 58 rss: 76Mb L: 19/30 MS: 1 CrossOver- 00:06:31.199 [2024-11-26 18:56:48.247485] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:31.199 [2024-11-26 18:56:48.247617] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (415320) > buf size (4096) 00:06:31.199 [2024-11-26 18:56:48.247836] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:31.200 [2024-11-26 18:56:48.248055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:240281dd cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.248081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.200 [2024-11-26 18:56:48.248137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.248152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.200 [2024-11-26 18:56:48.248207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.248222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.200 [2024-11-26 18:56:48.248279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:95958195 
cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.248294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.200 #59 NEW cov: 12591 ft: 16193 corp: 31/621b lim: 30 exec/s: 59 rss: 76Mb L: 25/30 MS: 1 ChangeBinInt- 00:06:31.200 [2024-11-26 18:56:48.287652] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:31.200 [2024-11-26 18:56:48.287775] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:06:31.200 [2024-11-26 18:56:48.287892] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:31.200 [2024-11-26 18:56:48.288004] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:31.200 [2024-11-26 18:56:48.288114] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x9595 00:06:31.200 [2024-11-26 18:56:48.288340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.288365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.200 [2024-11-26 18:56:48.288422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.288437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.200 [2024-11-26 18:56:48.288493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e781e7 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.288509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.200 [2024-11-26 18:56:48.288562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.288577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.200 [2024-11-26 18:56:48.288630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:95950095 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.288645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.200 #60 NEW cov: 12591 ft: 16238 corp: 32/651b lim: 30 exec/s: 60 rss: 76Mb L: 30/30 MS: 1 CrossOver- 00:06:31.200 [2024-11-26 18:56:48.327844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.327870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.200 #63 NEW cov: 12591 ft: 16269 corp: 33/657b lim: 30 exec/s: 63 rss: 76Mb L: 6/30 MS: 3 EraseBytes-ShuffleBytes-CopyPart- 00:06:31.200 [2024-11-26 18:56:48.367801] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000095 
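The companion check at ctrlr.c:2657 validates the log page offset, which per the NVMe spec is a 64-bit value assembled from CDW12 (LPOL, low dword) and CDW13 (LPOU, high dword). Those two dwords are not echoed by the admin-command printer above, but the rejected offsets (0x100009595, 0x30000ff95, and so on) are consistent with the fuzzer's 0x95 fill pattern landing in them. A bash sketch of the assembly, with hypothetical cdw12/cdw13 values chosen to match the first offset above:

    cdw12=0x00009595; cdw13=0x00000001           # hypothetical, inferred from the rejected offset
    printf '0x%x\n' $(( (cdw13 << 32) | cdw12 )) # -> 0x100009595, as rejected above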
00:06:31.200 [2024-11-26 18:56:48.367923] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:31.200 [2024-11-26 18:56:48.368036] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:31.200 [2024-11-26 18:56:48.368252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd838b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.368278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.200 [2024-11-26 18:56:48.368336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.368350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.200 [2024-11-26 18:56:48.368408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.368422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.200 #64 NEW cov: 12591 ft: 16284 corp: 34/676b lim: 30 exec/s: 64 rss: 76Mb L: 19/30 MS: 1 ChangeBinInt- 00:06:31.200 [2024-11-26 18:56:48.407972] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xeaea 00:06:31.200 [2024-11-26 18:56:48.408097] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000eaea 00:06:31.200 [2024-11-26 18:56:48.408214] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000eaea 00:06:31.200 [2024-11-26 18:56:48.408332] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (240556) > buf size (4096) 00:06:31.200 [2024-11-26 18:56:48.408584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00b40000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.408610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.200 [2024-11-26 18:56:48.408666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:eaea02ea cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.408680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.200 [2024-11-26 18:56:48.408732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:eaea02ea cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.408746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.200 [2024-11-26 18:56:48.408799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:eaea00ea cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.200 [2024-11-26 18:56:48.408814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.459 #65 NEW cov: 12591 ft: 16302 corp: 35/702b lim: 30 exec/s: 65 rss: 76Mb L: 
26/30 MS: 1 InsertRepeatedBytes- 00:06:31.459 [2024-11-26 18:56:48.468125] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200002a2a 00:06:31.459 [2024-11-26 18:56:48.468264] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:31.459 [2024-11-26 18:56:48.468411] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:31.459 [2024-11-26 18:56:48.468542] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009595 00:06:31.459 [2024-11-26 18:56:48.468772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02dd022a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.459 [2024-11-26 18:56:48.468798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.459 [2024-11-26 18:56:48.468854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:2a2a8195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.459 [2024-11-26 18:56:48.468868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.459 [2024-11-26 18:56:48.468923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.459 [2024-11-26 18:56:48.468937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.459 [2024-11-26 18:56:48.468992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:95958195 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.459 [2024-11-26 18:56:48.469006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.459 #66 NEW cov: 12591 ft: 16327 corp: 36/726b lim: 30 exec/s: 33 rss: 76Mb L: 24/30 MS: 1 InsertRepeatedBytes- 00:06:31.459 #66 DONE cov: 12591 ft: 16327 corp: 36/726b lim: 30 exec/s: 33 rss: 76Mb 00:06:31.459 ###### Recommended dictionary. ###### 00:06:31.459 "\000\000\000\000\000\000\000\000" # Uses: 2 00:06:31.459 ###### End of recommended dictionary. 
###### 00:06:31.459 Done 66 runs in 2 second(s) 00:06:31.459 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:31.459 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:31.459 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:31.459 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:31.460 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:31.460 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:31.460 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:31.460 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:31.460 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:31.460 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:31.460 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:31.460 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:31.460 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:31.460 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:31.460 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:31.460 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:31.460 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:31.460 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:31.460 18:56:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:31.460 [2024-11-26 18:56:48.644333] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:31.460 [2024-11-26 18:56:48.644401] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2737271 ] 00:06:31.718 [2024-11-26 18:56:48.834937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.718 [2024-11-26 18:56:48.873976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.977 [2024-11-26 18:56:48.932979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.977 [2024-11-26 18:56:48.949116] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:31.977 INFO: Running with entropic power schedule (0xFF, 100). 
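Between runs, the run.sh trace above shows how each fuzzer instance is wired up: the instance number picks a listener port (4402 for instance 2), the template JSON config is rewritten for that port, two LeakSanitizer suppressions are registered, and the resulting transport ID is handed to llvm_nvme_fuzz together with the per-instance corpus directory. Reproducing the invocation by hand would look roughly like this; the output redirections are inferred (set -x does not trace them) and the $spdk shorthand is mine, but every path and flag is taken verbatim from the trace:

    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' \
        "$spdk/test/fuzz/llvm/nvmf/fuzz_json.conf" > /tmp/fuzz_json_2.conf
    echo leak:spdk_nvmf_qpair_disconnect  > /var/tmp/suppress_nvmf_fuzz
    echo leak:nvmf_ctrlr_create          >> /var/tmp/suppress_nvmf_fuzz
    LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 \
    "$spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$spdk/../output/llvm/" \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' \
        -c /tmp/fuzz_json_2.conf -t 1 -D "$spdk/../corpus/llvm_nvmf_2" -Z 2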
00:06:31.977 INFO: Seed: 1135183517 00:06:31.977 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:06:31.977 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:06:31.977 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:31.977 INFO: A corpus is not provided, starting from an empty corpus 00:06:31.977 #2 INITED exec/s: 0 rss: 67Mb 00:06:31.977 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:31.977 This may also happen if the target rejected all inputs we tried so far 00:06:31.977 [2024-11-26 18:56:49.019988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.977 [2024-11-26 18:56:49.020025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.238 NEW_FUNC[1/716]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:32.238 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:32.239 #11 NEW cov: 12274 ft: 12258 corp: 2/11b lim: 35 exec/s: 0 rss: 74Mb L: 10/10 MS: 4 CrossOver-InsertByte-CMP-CopyPart- DE: "\001\003"- 00:06:32.239 #12 NEW cov: 12387 ft: 13064 corp: 3/21b lim: 35 exec/s: 0 rss: 75Mb L: 10/10 MS: 1 CMP- DE: "\001\000\000\000\000\000\004\000"- 00:06:32.239 [2024-11-26 18:56:49.431041] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.239 [2024-11-26 18:56:49.431331] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.239 [2024-11-26 18:56:49.431632] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.239 [2024-11-26 18:56:49.432102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.239 [2024-11-26 18:56:49.432141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.239 [2024-11-26 18:56:49.432210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.239 [2024-11-26 18:56:49.432228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.239 [2024-11-26 18:56:49.432311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.239 [2024-11-26 18:56:49.432329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.239 [2024-11-26 18:56:49.432412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.239 [2024-11-26 18:56:49.432430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.497 #13 NEW cov: 12404 ft: 13979 corp: 
4/51b lim: 35 exec/s: 0 rss: 75Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:06:32.497 #14 NEW cov: 12489 ft: 14269 corp: 5/58b lim: 35 exec/s: 0 rss: 75Mb L: 7/30 MS: 1 EraseBytes- 00:06:32.497 [2024-11-26 18:56:49.561475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.497 [2024-11-26 18:56:49.561502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.497 #15 NEW cov: 12489 ft: 14442 corp: 6/68b lim: 35 exec/s: 0 rss: 75Mb L: 10/30 MS: 1 CrossOver- 00:06:32.497 [2024-11-26 18:56:49.611640] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.497 [2024-11-26 18:56:49.612180] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.497 [2024-11-26 18:56:49.612652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.497 [2024-11-26 18:56:49.612680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.497 [2024-11-26 18:56:49.612747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:8c000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.497 [2024-11-26 18:56:49.612767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.497 [2024-11-26 18:56:49.612851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:8c00008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.497 [2024-11-26 18:56:49.612869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.497 [2024-11-26 18:56:49.612958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.497 [2024-11-26 18:56:49.612976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.497 #16 NEW cov: 12489 ft: 14546 corp: 7/102b lim: 35 exec/s: 0 rss: 75Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:32.497 [2024-11-26 18:56:49.681819] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.497 [2024-11-26 18:56:49.682397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.497 [2024-11-26 18:56:49.682425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.497 [2024-11-26 18:56:49.682488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.497 [2024-11-26 18:56:49.682505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.756 #17 NEW cov: 12489 ft: 14903 corp: 8/120b lim: 35 exec/s: 0 rss: 75Mb L: 18/34 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:32.756 
[2024-11-26 18:56:49.762142] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.756 [2024-11-26 18:56:49.762643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:ca004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.756 [2024-11-26 18:56:49.762674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.756 [2024-11-26 18:56:49.762741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.756 [2024-11-26 18:56:49.762762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.756 #18 NEW cov: 12489 ft: 14923 corp: 9/138b lim: 35 exec/s: 0 rss: 75Mb L: 18/34 MS: 1 ChangeByte- 00:06:32.756 [2024-11-26 18:56:49.832637] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.756 [2024-11-26 18:56:49.832935] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.756 [2024-11-26 18:56:49.833234] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.756 [2024-11-26 18:56:49.833763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.756 [2024-11-26 18:56:49.833790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.756 [2024-11-26 18:56:49.833854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.756 [2024-11-26 18:56:49.833872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.756 [2024-11-26 18:56:49.833945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.756 [2024-11-26 18:56:49.833964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.756 [2024-11-26 18:56:49.834050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.756 [2024-11-26 18:56:49.834067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.756 #19 NEW cov: 12489 ft: 14959 corp: 10/172b lim: 35 exec/s: 0 rss: 75Mb L: 34/34 MS: 1 CMP- DE: "\011\000\000\000"- 00:06:32.756 [2024-11-26 18:56:49.882683] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.756 [2024-11-26 18:56:49.883216] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.756 [2024-11-26 18:56:49.883710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.756 [2024-11-26 18:56:49.883737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.756 [2024-11-26 18:56:49.883801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:8c000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.756 [2024-11-26 18:56:49.883819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.756 [2024-11-26 18:56:49.883906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:8c00008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.756 [2024-11-26 18:56:49.883922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.756 [2024-11-26 18:56:49.884006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.756 [2024-11-26 18:56:49.884023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.757 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:32.757 #20 NEW cov: 12512 ft: 15019 corp: 11/206b lim: 35 exec/s: 0 rss: 75Mb L: 34/34 MS: 1 ChangeByte- 00:06:32.757 [2024-11-26 18:56:49.963344] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.757 [2024-11-26 18:56:49.964169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.757 [2024-11-26 18:56:49.964196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.757 [2024-11-26 18:56:49.964346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:8c00008c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.757 [2024-11-26 18:56:49.964365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.757 [2024-11-26 18:56:49.964451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:0000008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.757 [2024-11-26 18:56:49.964467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.015 #21 NEW cov: 12512 ft: 15196 corp: 12/237b lim: 35 exec/s: 21 rss: 75Mb L: 31/34 MS: 1 CrossOver- 00:06:33.015 [2024-11-26 18:56:50.043363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.015 [2024-11-26 18:56:50.043392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.015 #22 NEW cov: 12512 ft: 15207 corp: 13/247b lim: 35 exec/s: 22 rss: 75Mb L: 10/34 MS: 1 ChangeBinInt- 00:06:33.015 [2024-11-26 18:56:50.093685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.015 [2024-11-26 18:56:50.093710] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.015 #23 NEW cov: 12512 ft: 15222 corp: 14/257b lim: 35 exec/s: 23 rss: 75Mb L: 10/34 MS: 1 CrossOver- 00:06:33.015 [2024-11-26 18:56:50.143674] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:33.015 [2024-11-26 18:56:50.144167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400e000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.015 [2024-11-26 18:56:50.144193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.015 [2024-11-26 18:56:50.144247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.015 [2024-11-26 18:56:50.144266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.015 #24 NEW cov: 12512 ft: 15250 corp: 15/275b lim: 35 exec/s: 24 rss: 75Mb L: 18/34 MS: 1 ChangeBit- 00:06:33.015 [2024-11-26 18:56:50.193842] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:33.015 [2024-11-26 18:56:50.194347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400e000a cdw11:03001200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.015 [2024-11-26 18:56:50.194374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.015 [2024-11-26 18:56:50.194443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.015 [2024-11-26 18:56:50.194463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.274 #25 NEW cov: 12512 ft: 15280 corp: 16/293b lim: 35 exec/s: 25 rss: 75Mb L: 18/34 MS: 1 ChangeBinInt- 00:06:33.274 [2024-11-26 18:56:50.264094] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:33.274 [2024-11-26 18:56:50.264648] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:33.274 [2024-11-26 18:56:50.265182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.274 [2024-11-26 18:56:50.265212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.274 [2024-11-26 18:56:50.265264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:8c000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.274 [2024-11-26 18:56:50.265282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.274 [2024-11-26 18:56:50.265365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:8c00008c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.274 [2024-11-26 18:56:50.265382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.274 [2024-11-26 
18:56:50.265465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.274 [2024-11-26 18:56:50.265504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.274 #26 NEW cov: 12512 ft: 15356 corp: 17/327b lim: 35 exec/s: 26 rss: 75Mb L: 34/34 MS: 1 ChangeBinInt- 00:06:33.274 [2024-11-26 18:56:50.314211] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:33.274 [2024-11-26 18:56:50.314750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400e000a cdw11:0a001203 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.274 [2024-11-26 18:56:50.314776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.274 [2024-11-26 18:56:50.314833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.274 [2024-11-26 18:56:50.314851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.274 #27 NEW cov: 12512 ft: 15368 corp: 18/344b lim: 35 exec/s: 27 rss: 75Mb L: 17/34 MS: 1 EraseBytes- 00:06:33.274 [2024-11-26 18:56:50.385618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:97970079 cdw11:97009797 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.274 [2024-11-26 18:56:50.385643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.274 [2024-11-26 18:56:50.385692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:97970097 cdw11:97009797 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.274 [2024-11-26 18:56:50.385708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.274 [2024-11-26 18:56:50.385788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:97970097 cdw11:97009797 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.274 [2024-11-26 18:56:50.385806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.274 [2024-11-26 18:56:50.385888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:97970097 cdw11:97009797 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.274 [2024-11-26 18:56:50.385905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.274 #32 NEW cov: 12512 ft: 15387 corp: 19/375b lim: 35 exec/s: 32 rss: 75Mb L: 31/34 MS: 5 CopyPart-ChangeByte-ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:06:33.274 [2024-11-26 18:56:50.434724] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:33.274 [2024-11-26 18:56:50.435222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.274 [2024-11-26 18:56:50.435252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.274 [2024-11-26 18:56:50.435320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.274 [2024-11-26 18:56:50.435340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.274 #33 NEW cov: 12512 ft: 15416 corp: 20/395b lim: 35 exec/s: 33 rss: 76Mb L: 20/34 MS: 1 EraseBytes- 00:06:33.561 #34 NEW cov: 12512 ft: 15498 corp: 21/405b lim: 35 exec/s: 34 rss: 76Mb L: 10/34 MS: 1 ChangeBit- 00:06:33.561 [2024-11-26 18:56:50.555494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.561 [2024-11-26 18:56:50.555521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.561 #35 NEW cov: 12512 ft: 15510 corp: 22/416b lim: 35 exec/s: 35 rss: 76Mb L: 11/34 MS: 1 InsertByte- 00:06:33.561 [2024-11-26 18:56:50.625689] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:33.561 [2024-11-26 18:56:50.626200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400e000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.561 [2024-11-26 18:56:50.626227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.561 [2024-11-26 18:56:50.626300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.561 [2024-11-26 18:56:50.626319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.561 #36 NEW cov: 12512 ft: 15622 corp: 23/435b lim: 35 exec/s: 36 rss: 76Mb L: 19/34 MS: 1 InsertByte- 00:06:33.561 [2024-11-26 18:56:50.676002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.561 [2024-11-26 18:56:50.676028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.561 #37 NEW cov: 12512 ft: 15651 corp: 24/445b lim: 35 exec/s: 37 rss: 76Mb L: 10/34 MS: 1 ChangeByte- 00:06:33.561 [2024-11-26 18:56:50.726435] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:33.561 [2024-11-26 18:56:50.727017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:ca004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.561 [2024-11-26 18:56:50.727044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.561 [2024-11-26 18:56:50.727094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a400040 cdw11:0000010a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.561 [2024-11-26 18:56:50.727110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.561 [2024-11-26 18:56:50.727192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:6 nsid:0 cdw10:030a0000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.561 [2024-11-26 18:56:50.727210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.561 #38 NEW cov: 12512 ft: 15775 corp: 25/469b lim: 35 exec/s: 38 rss: 76Mb L: 24/34 MS: 1 CrossOver- 00:06:33.819 [2024-11-26 18:56:50.796561] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:33.819 [2024-11-26 18:56:50.796862] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:33.819 [2024-11-26 18:56:50.797349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:ca004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.819 [2024-11-26 18:56:50.797378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.819 [2024-11-26 18:56:50.797443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00180000 cdw11:0000010a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.819 [2024-11-26 18:56:50.797461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.819 [2024-11-26 18:56:50.797539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:030a0000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.819 [2024-11-26 18:56:50.797557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.819 #39 NEW cov: 12512 ft: 15802 corp: 26/493b lim: 35 exec/s: 39 rss: 76Mb L: 24/34 MS: 1 ChangeBinInt- 00:06:33.819 [2024-11-26 18:56:50.867154] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:33.819 [2024-11-26 18:56:50.867745] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:33.819 [2024-11-26 18:56:50.868248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:03004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.819 [2024-11-26 18:56:50.868277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.819 [2024-11-26 18:56:50.868327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:8c000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.819 [2024-11-26 18:56:50.868344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.819 [2024-11-26 18:56:50.868420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:8c00008c cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.819 [2024-11-26 18:56:50.868438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.819 [2024-11-26 18:56:50.868508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.819 [2024-11-26 18:56:50.868525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.819 #40 NEW cov: 12512 ft: 15816 corp: 27/527b lim: 35 exec/s: 40 rss: 76Mb L: 34/34 MS: 1 ChangeBit- 00:06:33.819 [2024-11-26 18:56:50.917302] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:33.819 [2024-11-26 18:56:50.918073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:ca004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.819 [2024-11-26 18:56:50.918101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.819 [2024-11-26 18:56:50.918160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00180000 cdw11:0000010a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.819 [2024-11-26 18:56:50.918179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.819 [2024-11-26 18:56:50.918263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.819 [2024-11-26 18:56:50.918280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.819 #41 NEW cov: 12512 ft: 15838 corp: 28/551b lim: 35 exec/s: 41 rss: 76Mb L: 24/34 MS: 1 PersAutoDict- DE: "\011\000\000\000"- 00:06:33.819 [2024-11-26 18:56:50.987523] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:33.819 [2024-11-26 18:56:50.988014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:400a000a cdw11:0a004001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.819 [2024-11-26 18:56:50.988045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.819 [2024-11-26 18:56:50.988108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:09000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.819 [2024-11-26 18:56:50.988127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.077 #42 NEW cov: 12512 ft: 15844 corp: 29/569b lim: 35 exec/s: 21 rss: 76Mb L: 18/34 MS: 1 EraseBytes- 00:06:34.077 #42 DONE cov: 12512 ft: 15844 corp: 29/569b lim: 35 exec/s: 21 rss: 76Mb 00:06:34.077 ###### Recommended dictionary. ###### 00:06:34.077 "\001\003" # Uses: 0 00:06:34.077 "\001\000\000\000\000\000\004\000" # Uses: 0 00:06:34.077 "\000\000\000\000\000\000\000\000" # Uses: 0 00:06:34.077 "\011\000\000\000" # Uses: 1 00:06:34.077 ###### End of recommended dictionary. 
###### 00:06:34.077 Done 42 runs in 2 second(s) 00:06:34.077 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:34.078 18:56:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:34.078 [2024-11-26 18:56:51.181597] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:34.078 [2024-11-26 18:56:51.181668] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2737628 ] 00:06:34.336 [2024-11-26 18:56:51.368783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.336 [2024-11-26 18:56:51.406829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.336 [2024-11-26 18:56:51.465824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.336 [2024-11-26 18:56:51.481978] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:34.336 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:34.336 INFO: Seed: 3670181630 00:06:34.336 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:06:34.336 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:06:34.336 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:34.336 INFO: A corpus is not provided, starting from an empty corpus 00:06:34.336 #2 INITED exec/s: 0 rss: 67Mb 00:06:34.336 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:34.336 This may also happen if the target rejected all inputs we tried so far 00:06:34.853 NEW_FUNC[1/705]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:34.853 NEW_FUNC[2/705]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:34.853 #14 NEW cov: 12183 ft: 12182 corp: 2/9b lim: 20 exec/s: 0 rss: 74Mb L: 8/8 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:34.853 #16 NEW cov: 12296 ft: 13029 corp: 3/14b lim: 20 exec/s: 0 rss: 74Mb L: 5/8 MS: 2 CrossOver-CMP- DE: "\377\377\377\025"- 00:06:34.853 #17 NEW cov: 12302 ft: 13247 corp: 4/23b lim: 20 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 InsertByte- 00:06:35.112 #18 NEW cov: 12404 ft: 13919 corp: 5/39b lim: 20 exec/s: 0 rss: 74Mb L: 16/16 MS: 1 CMP- DE: "\313\307\242h\303\204I\000"- 00:06:35.112 #19 NEW cov: 12404 ft: 13980 corp: 6/58b lim: 20 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:06:35.112 #20 NEW cov: 12404 ft: 14050 corp: 7/63b lim: 20 exec/s: 0 rss: 75Mb L: 5/19 MS: 1 CrossOver- 00:06:35.370 #23 NEW cov: 12404 ft: 14116 corp: 8/67b lim: 20 exec/s: 0 rss: 75Mb L: 4/19 MS: 3 EraseBytes-EraseBytes-CopyPart- 00:06:35.370 #24 NEW cov: 12404 ft: 14178 corp: 9/84b lim: 20 exec/s: 0 rss: 75Mb L: 17/19 MS: 1 InsertByte- 00:06:35.370 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:35.370 #25 NEW cov: 12427 ft: 14221 corp: 10/92b lim: 20 exec/s: 0 rss: 75Mb L: 8/19 MS: 1 CrossOver- 00:06:35.370 #26 NEW cov: 12427 ft: 14260 corp: 11/110b lim: 20 exec/s: 0 rss: 75Mb L: 18/19 MS: 1 InsertRepeatedBytes- 00:06:35.628 NEW_FUNC[1/4]: 0x1379068 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3484 00:06:35.628 NEW_FUNC[2/4]: 0x1379be8 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3426 00:06:35.628 #27 NEW cov: 12510 ft: 14409 corp: 12/128b lim: 20 exec/s: 27 rss: 75Mb L: 18/19 MS: 1 ChangeBinInt- 00:06:35.628 #28 NEW cov: 12510 ft: 14482 corp: 13/137b lim: 20 exec/s: 28 rss: 75Mb L: 9/19 MS: 1 ChangeBinInt- 00:06:35.628 #29 NEW cov: 12510 ft: 14573 corp: 14/145b lim: 20 exec/s: 29 rss: 75Mb L: 8/19 MS: 1 ChangeBit- 00:06:35.628 #30 NEW cov: 12510 ft: 14649 corp: 15/150b lim: 20 exec/s: 30 rss: 75Mb L: 5/19 MS: 1 CrossOver- 00:06:35.886 #31 NEW cov: 12510 ft: 14712 corp: 16/168b lim: 20 exec/s: 31 rss: 75Mb L: 18/19 MS: 1 CrossOver- 00:06:35.886 #32 NEW cov: 12510 ft: 14724 corp: 17/177b lim: 20 exec/s: 32 rss: 75Mb L: 9/19 MS: 1 InsertByte- 00:06:35.886 #33 NEW cov: 12510 ft: 14742 corp: 18/185b lim: 20 exec/s: 33 rss: 75Mb L: 8/19 MS: 1 ChangeByte- 00:06:35.886 #34 NEW cov: 12510 ft: 14754 corp: 19/203b lim: 20 exec/s: 34 rss: 75Mb L: 18/19 MS: 1 ChangeBit- 00:06:36.144 #35 NEW cov: 12510 ft: 14769 corp: 20/221b lim: 20 exec/s: 35 rss: 75Mb L: 18/19 MS: 1 
ChangeBinInt- 00:06:36.144 [2024-11-26 18:56:53.191190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:36.144 [2024-11-26 18:56:53.191239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.144 NEW_FUNC[1/16]: 0x159ef88 in _nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3649 00:06:36.144 NEW_FUNC[2/16]: 0x186c6e8 in nvme_ctrlr_queue_async_event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3300 00:06:36.144 #36 NEW cov: 12756 ft: 15132 corp: 21/240b lim: 20 exec/s: 36 rss: 75Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:06:36.144 #37 NEW cov: 12760 ft: 15233 corp: 22/252b lim: 20 exec/s: 37 rss: 75Mb L: 12/19 MS: 1 EraseBytes- 00:06:36.403 #38 NEW cov: 12760 ft: 15278 corp: 23/272b lim: 20 exec/s: 38 rss: 75Mb L: 20/20 MS: 1 InsertByte- 00:06:36.403 [2024-11-26 18:56:53.451883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:36.403 [2024-11-26 18:56:53.451921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.403 #39 NEW cov: 12760 ft: 15305 corp: 24/290b lim: 20 exec/s: 19 rss: 75Mb L: 18/20 MS: 1 InsertRepeatedBytes- 00:06:36.403 #39 DONE cov: 12760 ft: 15305 corp: 24/290b lim: 20 exec/s: 19 rss: 75Mb 00:06:36.403 ###### Recommended dictionary. ###### 00:06:36.404 "\377\377\377\025" # Uses: 0 00:06:36.404 "\313\307\242h\303\204I\000" # Uses: 0 00:06:36.404 ###### End of recommended dictionary. ###### 00:06:36.404 Done 39 runs in 2 second(s) 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # 
sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:36.662 18:56:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:36.662 [2024-11-26 18:56:53.681492] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:36.662 [2024-11-26 18:56:53.681560] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2737892 ] 00:06:36.662 [2024-11-26 18:56:53.872561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.920 [2024-11-26 18:56:53.911423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.920 [2024-11-26 18:56:53.970612] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.920 [2024-11-26 18:56:53.986758] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:36.920 INFO: Running with entropic power schedule (0xFF, 100). 00:06:36.920 INFO: Seed: 1880234307 00:06:36.920 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:06:36.920 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:06:36.920 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:36.920 INFO: A corpus is not provided, starting from an empty corpus 00:06:36.920 #2 INITED exec/s: 0 rss: 67Mb 00:06:36.920 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:36.920 This may also happen if the target rejected all inputs we tried so far 00:06:36.920 [2024-11-26 18:56:54.052812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00004acc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.920 [2024-11-26 18:56:54.052839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.920 [2024-11-26 18:56:54.052913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.921 [2024-11-26 18:56:54.052927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.921 [2024-11-26 18:56:54.052980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.921 [2024-11-26 18:56:54.052994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.921 [2024-11-26 18:56:54.053048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.921 [2024-11-26 18:56:54.053061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.179 NEW_FUNC[1/716]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:37.179 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:37.179 #10 NEW cov: 12293 ft: 12295 corp: 2/30b lim: 35 exec/s: 0 rss: 74Mb L: 29/29 MS: 3 InsertByte-ChangeBit-InsertRepeatedBytes- 00:06:37.438 [2024-11-26 18:56:54.393315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4a4a0a4a cdw11:4a4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.438 [2024-11-26 18:56:54.393376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.438 NEW_FUNC[1/1]: 0x17af0a8 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1571 00:06:37.438 #16 NEW cov: 12409 ft: 13674 corp: 3/40b lim: 35 exec/s: 0 rss: 74Mb L: 10/29 MS: 1 InsertRepeatedBytes- 00:06:37.438 [2024-11-26 18:56:54.443700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00004acc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.438 [2024-11-26 18:56:54.443727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.438 [2024-11-26 18:56:54.443785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.438 [2024-11-26 18:56:54.443800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.438 [2024-11-26 18:56:54.443854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 
cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.438 [2024-11-26 18:56:54.443868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.438 [2024-11-26 18:56:54.443926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.438 [2024-11-26 18:56:54.443941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.438 #17 NEW cov: 12415 ft: 13897 corp: 4/70b lim: 35 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 InsertByte- 00:06:37.438 [2024-11-26 18:56:54.503870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00004acc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.438 [2024-11-26 18:56:54.503897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.438 [2024-11-26 18:56:54.503952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7e000000 cdw11:003d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.438 [2024-11-26 18:56:54.503966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.438 [2024-11-26 18:56:54.504018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.438 [2024-11-26 18:56:54.504031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.438 [2024-11-26 18:56:54.504085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.438 [2024-11-26 18:56:54.504098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.438 #18 NEW cov: 12500 ft: 14187 corp: 5/101b lim: 35 exec/s: 0 rss: 75Mb L: 31/31 MS: 1 InsertByte- 00:06:37.438 [2024-11-26 18:56:54.563989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00004acc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.438 [2024-11-26 18:56:54.564014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.438 [2024-11-26 18:56:54.564068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:e0000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.438 [2024-11-26 18:56:54.564082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.438 [2024-11-26 18:56:54.564133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.439 [2024-11-26 18:56:54.564147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.439 [2024-11-26 18:56:54.564197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 
nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.439 [2024-11-26 18:56:54.564212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.439 #19 NEW cov: 12500 ft: 14321 corp: 6/131b lim: 35 exec/s: 0 rss: 75Mb L: 30/31 MS: 1 InsertByte- 00:06:37.439 [2024-11-26 18:56:54.604117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00004acc cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.439 [2024-11-26 18:56:54.604141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.439 [2024-11-26 18:56:54.604192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.439 [2024-11-26 18:56:54.604206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.439 [2024-11-26 18:56:54.604261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.439 [2024-11-26 18:56:54.604274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.439 [2024-11-26 18:56:54.604325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.439 [2024-11-26 18:56:54.604338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.439 #20 NEW cov: 12500 ft: 14546 corp: 7/161b lim: 35 exec/s: 0 rss: 75Mb L: 30/31 MS: 1 ChangeByte- 00:06:37.439 [2024-11-26 18:56:54.643913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4a4a4c4a cdw11:4a4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.439 [2024-11-26 18:56:54.643937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.697 #25 NEW cov: 12500 ft: 14607 corp: 8/169b lim: 35 exec/s: 0 rss: 75Mb L: 8/31 MS: 5 InsertByte-ChangeBit-ChangeBinInt-EraseBytes-CrossOver- 00:06:37.697 [2024-11-26 18:56:54.683882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4a4a0e4a cdw11:4a4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.697 [2024-11-26 18:56:54.683905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.697 #26 NEW cov: 12500 ft: 14681 corp: 9/179b lim: 35 exec/s: 0 rss: 75Mb L: 10/31 MS: 1 ChangeBit- 00:06:37.697 [2024-11-26 18:56:54.744012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4a4a0e4a cdw11:4a4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.697 [2024-11-26 18:56:54.744036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.697 #27 NEW cov: 12500 ft: 14722 corp: 10/189b lim: 35 exec/s: 0 rss: 75Mb L: 10/31 MS: 1 ChangeBinInt- 00:06:37.697 [2024-11-26 18:56:54.804192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 
nsid:0 cdw10:4a4a0a4e cdw11:4a4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.697 [2024-11-26 18:56:54.804216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.697 #28 NEW cov: 12500 ft: 14763 corp: 11/199b lim: 35 exec/s: 0 rss: 75Mb L: 10/31 MS: 1 ChangeBit- 00:06:37.697 [2024-11-26 18:56:54.844792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00004acc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.697 [2024-11-26 18:56:54.844816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.697 [2024-11-26 18:56:54.844871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.697 [2024-11-26 18:56:54.844885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.697 [2024-11-26 18:56:54.844936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.697 [2024-11-26 18:56:54.844948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.697 [2024-11-26 18:56:54.845002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.697 [2024-11-26 18:56:54.845015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.697 #29 NEW cov: 12500 ft: 14789 corp: 12/233b lim: 35 exec/s: 0 rss: 75Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:37.697 [2024-11-26 18:56:54.884450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4a4a0a4a cdw11:4a4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.697 [2024-11-26 18:56:54.884482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.697 #30 NEW cov: 12500 ft: 14825 corp: 13/240b lim: 35 exec/s: 0 rss: 75Mb L: 7/34 MS: 1 EraseBytes- 00:06:37.956 [2024-11-26 18:56:54.924772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4a4a0e4a cdw11:cc000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:54.924798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.956 [2024-11-26 18:56:54.924854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:007e0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:54.924869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.956 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:37.956 #31 NEW cov: 12523 ft: 15115 corp: 14/256b lim: 35 exec/s: 0 rss: 75Mb L: 16/34 MS: 1 CrossOver- 00:06:37.956 [2024-11-26 18:56:54.985012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 
cdw10:4a4a0e4a cdw11:cc000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:54.985037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.956 [2024-11-26 18:56:54.985092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:54.985105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.956 [2024-11-26 18:56:54.985158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:7e4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:54.985173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.956 #32 NEW cov: 12523 ft: 15330 corp: 15/278b lim: 35 exec/s: 0 rss: 75Mb L: 22/34 MS: 1 CopyPart- 00:06:37.956 [2024-11-26 18:56:55.045357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00004acc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:55.045382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.956 [2024-11-26 18:56:55.045436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00005b00 cdw11:e0000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:55.045450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.956 [2024-11-26 18:56:55.045507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:55.045522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.956 [2024-11-26 18:56:55.045576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:55.045589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.956 #33 NEW cov: 12523 ft: 15365 corp: 16/308b lim: 35 exec/s: 33 rss: 75Mb L: 30/34 MS: 1 ChangeByte- 00:06:37.956 [2024-11-26 18:56:55.105341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00004acc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:55.105370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.956 [2024-11-26 18:56:55.105424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00005b00 cdw11:e0000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:55.105437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.956 [2024-11-26 18:56:55.105494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:55.105508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.956 #34 NEW cov: 12523 ft: 15370 corp: 17/335b lim: 35 exec/s: 34 rss: 75Mb L: 27/34 MS: 1 EraseBytes- 00:06:37.956 [2024-11-26 18:56:55.165732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00004acc cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:55.165757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.956 [2024-11-26 18:56:55.165812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:007e0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:55.165825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.956 [2024-11-26 18:56:55.165876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:55.165890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.956 [2024-11-26 18:56:55.165941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.956 [2024-11-26 18:56:55.165954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.215 #35 NEW cov: 12523 ft: 15433 corp: 18/365b lim: 35 exec/s: 35 rss: 75Mb L: 30/34 MS: 1 ShuffleBytes- 00:06:38.215 [2024-11-26 18:56:55.225813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00004acc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.215 [2024-11-26 18:56:55.225838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.215 [2024-11-26 18:56:55.225892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.215 [2024-11-26 18:56:55.225906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.215 [2024-11-26 18:56:55.225959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.215 [2024-11-26 18:56:55.225972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.215 [2024-11-26 18:56:55.226026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.215 [2024-11-26 18:56:55.226045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.215 #36 NEW cov: 12523 ft: 15446 corp: 19/397b lim: 35 exec/s: 36 rss: 75Mb L: 32/34 MS: 1 CrossOver- 00:06:38.215 [2024-11-26 
18:56:55.265658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4a4a0a4a cdw11:96170001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.215 [2024-11-26 18:56:55.265686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.215 [2024-11-26 18:56:55.265741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8449ecc4 cdw11:004a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.215 [2024-11-26 18:56:55.265754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.215 #37 NEW cov: 12523 ft: 15530 corp: 20/415b lim: 35 exec/s: 37 rss: 75Mb L: 18/34 MS: 1 CMP- DE: "\226\027\267\354\304\204I\000"- 00:06:38.215 [2024-11-26 18:56:55.305586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4a000a4a cdw11:4a4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.215 [2024-11-26 18:56:55.305611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.215 #38 NEW cov: 12523 ft: 15573 corp: 21/422b lim: 35 exec/s: 38 rss: 75Mb L: 7/34 MS: 1 CrossOver- 00:06:38.215 [2024-11-26 18:56:55.365738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4a4a0a4a cdw11:4a4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.215 [2024-11-26 18:56:55.365761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.215 #39 NEW cov: 12523 ft: 15625 corp: 22/432b lim: 35 exec/s: 39 rss: 75Mb L: 10/34 MS: 1 ShuffleBytes- 00:06:38.215 [2024-11-26 18:56:55.406340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005dcc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.215 [2024-11-26 18:56:55.406364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.215 [2024-11-26 18:56:55.406419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00005b00 cdw11:e0000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.215 [2024-11-26 18:56:55.406432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.215 [2024-11-26 18:56:55.406490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.215 [2024-11-26 18:56:55.406504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.215 [2024-11-26 18:56:55.406559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.215 [2024-11-26 18:56:55.406572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.473 #40 NEW cov: 12523 ft: 15643 corp: 23/462b lim: 35 exec/s: 40 rss: 75Mb L: 30/34 MS: 1 ChangeByte- 00:06:38.473 [2024-11-26 18:56:55.446234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 
cdw10:00004acc cdw11:96170001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.473 [2024-11-26 18:56:55.446259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.473 [2024-11-26 18:56:55.446313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8449ecc4 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.474 [2024-11-26 18:56:55.446327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.474 [2024-11-26 18:56:55.446379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.474 [2024-11-26 18:56:55.446392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.474 #41 NEW cov: 12523 ft: 15693 corp: 24/489b lim: 35 exec/s: 41 rss: 75Mb L: 27/34 MS: 1 PersAutoDict- DE: "\226\027\267\354\304\204I\000"- 00:06:38.474 [2024-11-26 18:56:55.506567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:002b4acc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.474 [2024-11-26 18:56:55.506591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.474 [2024-11-26 18:56:55.506645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.474 [2024-11-26 18:56:55.506659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.474 [2024-11-26 18:56:55.506709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.474 [2024-11-26 18:56:55.506723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.474 [2024-11-26 18:56:55.506777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.474 [2024-11-26 18:56:55.506795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.474 #42 NEW cov: 12523 ft: 15697 corp: 25/521b lim: 35 exec/s: 42 rss: 75Mb L: 32/34 MS: 1 ChangeByte- 00:06:38.474 [2024-11-26 18:56:55.566758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00004acc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.474 [2024-11-26 18:56:55.566782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.474 [2024-11-26 18:56:55.566836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.474 [2024-11-26 18:56:55.566849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.474 [2024-11-26 18:56:55.566903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.474 [2024-11-26 18:56:55.566916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.474 [2024-11-26 18:56:55.566966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.474 [2024-11-26 18:56:55.566979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.474 #43 NEW cov: 12523 ft: 15702 corp: 26/553b lim: 35 exec/s: 43 rss: 75Mb L: 32/34 MS: 1 ChangeBit- 00:06:38.474 [2024-11-26 18:56:55.606877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00004acc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.474 [2024-11-26 18:56:55.606901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.474 [2024-11-26 18:56:55.606956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:7e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.474 [2024-11-26 18:56:55.606970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.474 [2024-11-26 18:56:55.607022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.474 [2024-11-26 18:56:55.607035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.474 [2024-11-26 18:56:55.607091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.474 [2024-11-26 18:56:55.607104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.474 #44 NEW cov: 12523 ft: 15716 corp: 27/587b lim: 35 exec/s: 44 rss: 76Mb L: 34/34 MS: 1 ShuffleBytes- 00:06:38.474 [2024-11-26 18:56:55.666565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4a4a0a4a cdw11:4a4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.474 [2024-11-26 18:56:55.666589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.733 #45 NEW cov: 12523 ft: 15766 corp: 28/597b lim: 35 exec/s: 45 rss: 76Mb L: 10/34 MS: 1 ChangeByte- 00:06:38.733 [2024-11-26 18:56:55.707005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6b6b4c6b cdw11:6b6b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.733 [2024-11-26 18:56:55.707029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.733 [2024-11-26 18:56:55.707082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6b6b6b6b cdw11:6b6b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.733 [2024-11-26 18:56:55.707096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:38.733 [2024-11-26 18:56:55.707146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4a4a6b6b cdw11:4a4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.733 [2024-11-26 18:56:55.707159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.733 #47 NEW cov: 12523 ft: 15812 corp: 29/618b lim: 35 exec/s: 47 rss: 76Mb L: 21/34 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:38.733 [2024-11-26 18:56:55.767303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:17b74a96 cdw11:ecc40001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.733 [2024-11-26 18:56:55.767327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.733 [2024-11-26 18:56:55.767381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:007e4900 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.733 [2024-11-26 18:56:55.767394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.733 [2024-11-26 18:56:55.767447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.733 [2024-11-26 18:56:55.767461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.733 [2024-11-26 18:56:55.767518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.733 [2024-11-26 18:56:55.767531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.733 #48 NEW cov: 12523 ft: 15836 corp: 30/648b lim: 35 exec/s: 48 rss: 76Mb L: 30/34 MS: 1 PersAutoDict- DE: "\226\027\267\354\304\204I\000"- 00:06:38.733 [2024-11-26 18:56:55.827331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4a4a0e4a cdw11:cc000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.733 [2024-11-26 18:56:55.827355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.733 [2024-11-26 18:56:55.827409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.733 [2024-11-26 18:56:55.827426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.733 [2024-11-26 18:56:55.827483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:7e4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.733 [2024-11-26 18:56:55.827497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.734 #49 NEW cov: 12523 ft: 15852 corp: 31/670b lim: 35 exec/s: 49 rss: 76Mb L: 22/34 MS: 1 ChangeBinInt- 00:06:38.734 [2024-11-26 18:56:55.887649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00004acc cdw11:00000003 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.734 [2024-11-26 18:56:55.887673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.734 [2024-11-26 18:56:55.887728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:007e0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.734 [2024-11-26 18:56:55.887742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.734 [2024-11-26 18:56:55.887795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0000b600 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.734 [2024-11-26 18:56:55.887808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.734 [2024-11-26 18:56:55.887862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.734 [2024-11-26 18:56:55.887875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.734 #50 NEW cov: 12523 ft: 15859 corp: 32/701b lim: 35 exec/s: 50 rss: 76Mb L: 31/34 MS: 1 InsertByte- 00:06:38.734 [2024-11-26 18:56:55.927340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4a4a0a54 cdw11:4a4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.734 [2024-11-26 18:56:55.927364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.993 #51 NEW cov: 12523 ft: 15883 corp: 33/711b lim: 35 exec/s: 51 rss: 76Mb L: 10/34 MS: 1 ChangeBinInt- 00:06:38.993 [2024-11-26 18:56:55.987640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4a000e4a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.993 [2024-11-26 18:56:55.987664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.993 [2024-11-26 18:56:55.987718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:4a4a0000 cdw11:4a4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.993 [2024-11-26 18:56:55.987732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.993 #52 NEW cov: 12523 ft: 15944 corp: 34/727b lim: 35 exec/s: 52 rss: 76Mb L: 16/34 MS: 1 InsertRepeatedBytes- 00:06:38.993 [2024-11-26 18:56:56.027591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4a4a0a4a cdw11:4a4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.993 [2024-11-26 18:56:56.027616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.993 #53 NEW cov: 12523 ft: 15954 corp: 35/737b lim: 35 exec/s: 26 rss: 76Mb L: 10/34 MS: 1 ChangeASCIIInt- 00:06:38.993 #53 DONE cov: 12523 ft: 15954 corp: 35/737b lim: 35 exec/s: 26 rss: 76Mb 00:06:38.993 ###### Recommended dictionary. ###### 00:06:38.993 "\226\027\267\354\304\204I\000" # Uses: 2 00:06:38.993 ###### End of recommended dictionary. 
###### 00:06:38.993 Done 53 runs in 2 second(s) 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:38.993 18:56:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:39.254 [2024-11-26 18:56:56.224370] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:39.254 [2024-11-26 18:56:56.224437] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2738192 ] 00:06:39.254 [2024-11-26 18:56:56.413651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.254 [2024-11-26 18:56:56.452334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.514 [2024-11-26 18:56:56.511564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.514 [2024-11-26 18:56:56.527708] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:39.514 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:39.514 INFO: Seed: 124253311 00:06:39.514 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:06:39.514 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:06:39.514 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:39.514 INFO: A corpus is not provided, starting from an empty corpus 00:06:39.514 #2 INITED exec/s: 0 rss: 67Mb 00:06:39.514 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:39.514 This may also happen if the target rejected all inputs we tried so far 00:06:39.514 [2024-11-26 18:56:56.577220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87871287 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.514 [2024-11-26 18:56:56.577249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.514 [2024-11-26 18:56:56.577316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.514 [2024-11-26 18:56:56.577340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.772 NEW_FUNC[1/717]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:39.772 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:39.772 #5 NEW cov: 12282 ft: 12306 corp: 2/24b lim: 45 exec/s: 0 rss: 74Mb L: 23/23 MS: 3 ChangeBit-ChangeBit-InsertRepeatedBytes- 00:06:39.772 [2024-11-26 18:56:56.918050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87871287 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.772 [2024-11-26 18:56:56.918086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.772 [2024-11-26 18:56:56.918154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.772 [2024-11-26 18:56:56.918172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.772 #6 NEW cov: 12420 ft: 12911 corp: 3/47b lim: 45 exec/s: 0 rss: 75Mb L: 23/23 MS: 1 ShuffleBytes- 00:06:39.772 [2024-11-26 18:56:56.978127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87871287 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.772 [2024-11-26 18:56:56.978155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.772 [2024-11-26 18:56:56.978220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.772 [2024-11-26 18:56:56.978239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.031 #12 NEW cov: 12426 ft: 13208 corp: 4/70b lim: 45 exec/s: 0 rss: 75Mb L: 23/23 MS: 1 ShuffleBytes- 
00:06:40.031 [2024-11-26 18:56:57.038302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87871287 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.031 [2024-11-26 18:56:57.038329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.031 [2024-11-26 18:56:57.038395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.031 [2024-11-26 18:56:57.038414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.031 #13 NEW cov: 12511 ft: 13544 corp: 5/93b lim: 45 exec/s: 0 rss: 75Mb L: 23/23 MS: 1 ChangeBit- 00:06:40.031 [2024-11-26 18:56:57.078189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.031 [2024-11-26 18:56:57.078215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.031 #14 NEW cov: 12511 ft: 14300 corp: 6/107b lim: 45 exec/s: 0 rss: 75Mb L: 14/23 MS: 1 EraseBytes- 00:06:40.031 [2024-11-26 18:56:57.118483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87871287 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.031 [2024-11-26 18:56:57.118509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.031 [2024-11-26 18:56:57.118577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.031 [2024-11-26 18:56:57.118600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.031 #15 NEW cov: 12511 ft: 14404 corp: 7/131b lim: 45 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 InsertByte- 00:06:40.031 [2024-11-26 18:56:57.158453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:84c50049 cdw11:ef8c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.031 [2024-11-26 18:56:57.158484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.031 #17 NEW cov: 12511 ft: 14487 corp: 8/140b lim: 45 exec/s: 0 rss: 75Mb L: 9/24 MS: 2 CopyPart-CMP- DE: "\000I\204\305\357\214n\\"- 00:06:40.031 [2024-11-26 18:56:57.198553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.031 [2024-11-26 18:56:57.198580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.031 #18 NEW cov: 12511 ft: 14545 corp: 9/154b lim: 45 exec/s: 0 rss: 75Mb L: 14/24 MS: 1 ShuffleBytes- 00:06:40.290 [2024-11-26 18:56:57.258908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00871218 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.290 [2024-11-26 18:56:57.258934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:40.290 [2024-11-26 18:56:57.259001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.290 [2024-11-26 18:56:57.259020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.290 #24 NEW cov: 12511 ft: 14666 corp: 10/178b lim: 45 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 ChangeBinInt- 00:06:40.290 [2024-11-26 18:56:57.318892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87871287 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.290 [2024-11-26 18:56:57.318918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.290 #25 NEW cov: 12511 ft: 14739 corp: 11/194b lim: 45 exec/s: 0 rss: 75Mb L: 16/24 MS: 1 EraseBytes- 00:06:40.290 [2024-11-26 18:56:57.379227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00e41218 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.290 [2024-11-26 18:56:57.379254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.290 [2024-11-26 18:56:57.379320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.290 [2024-11-26 18:56:57.379340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.290 #26 NEW cov: 12511 ft: 14760 corp: 12/218b lim: 45 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 ChangeByte- 00:06:40.290 [2024-11-26 18:56:57.439382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87871287 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.290 [2024-11-26 18:56:57.439409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.290 [2024-11-26 18:56:57.439482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.290 [2024-11-26 18:56:57.439501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.290 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:40.290 #27 NEW cov: 12534 ft: 14796 corp: 13/242b lim: 45 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 ShuffleBytes- 00:06:40.290 [2024-11-26 18:56:57.479785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00871218 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.290 [2024-11-26 18:56:57.479811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.290 [2024-11-26 18:56:57.479876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.290 [2024-11-26 18:56:57.479895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.290 
[2024-11-26 18:56:57.479961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.290 [2024-11-26 18:56:57.479980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.290 [2024-11-26 18:56:57.480059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.290 [2024-11-26 18:56:57.480080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.549 #28 NEW cov: 12534 ft: 15185 corp: 14/281b lim: 45 exec/s: 0 rss: 75Mb L: 39/39 MS: 1 CopyPart- 00:06:40.549 [2024-11-26 18:56:57.519576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87871287 cdw11:a7870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.549 [2024-11-26 18:56:57.519603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.549 [2024-11-26 18:56:57.519669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.549 [2024-11-26 18:56:57.519690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.549 #29 NEW cov: 12534 ft: 15232 corp: 15/304b lim: 45 exec/s: 29 rss: 75Mb L: 23/39 MS: 1 ShuffleBytes- 00:06:40.549 [2024-11-26 18:56:57.580112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00871218 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.549 [2024-11-26 18:56:57.580140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.549 [2024-11-26 18:56:57.580204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.549 [2024-11-26 18:56:57.580223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.549 [2024-11-26 18:56:57.580290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.549 [2024-11-26 18:56:57.580307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.549 [2024-11-26 18:56:57.580371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.549 [2024-11-26 18:56:57.580386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.549 #30 NEW cov: 12534 ft: 15253 corp: 16/343b lim: 45 exec/s: 30 rss: 75Mb L: 39/39 MS: 1 ShuffleBytes- 00:06:40.549 [2024-11-26 18:56:57.639942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87871287 cdw11:a7870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.550 [2024-11-26 18:56:57.639969] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.550 [2024-11-26 18:56:57.640040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.550 [2024-11-26 18:56:57.640059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.550 #31 NEW cov: 12534 ft: 15270 corp: 17/366b lim: 45 exec/s: 31 rss: 76Mb L: 23/39 MS: 1 CMP- DE: "\001\000\000\000"- 00:06:40.550 [2024-11-26 18:56:57.700120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00871218 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.550 [2024-11-26 18:56:57.700148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.550 [2024-11-26 18:56:57.700214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.550 [2024-11-26 18:56:57.700233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.550 #32 NEW cov: 12534 ft: 15286 corp: 18/390b lim: 45 exec/s: 32 rss: 76Mb L: 24/39 MS: 1 ChangeBit- 00:06:40.550 [2024-11-26 18:56:57.740038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.550 [2024-11-26 18:56:57.740065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.809 #33 NEW cov: 12534 ft: 15319 corp: 19/404b lim: 45 exec/s: 33 rss: 76Mb L: 14/39 MS: 1 ChangeBit- 00:06:40.809 [2024-11-26 18:56:57.800386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00e41218 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.809 [2024-11-26 18:56:57.800412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.809 [2024-11-26 18:56:57.800481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.809 [2024-11-26 18:56:57.800516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.809 #34 NEW cov: 12534 ft: 15342 corp: 20/428b lim: 45 exec/s: 34 rss: 76Mb L: 24/39 MS: 1 ChangeByte- 00:06:40.809 [2024-11-26 18:56:57.860584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00e41218 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.809 [2024-11-26 18:56:57.860611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.809 [2024-11-26 18:56:57.860677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.809 [2024-11-26 18:56:57.860696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.809 #35 NEW 
cov: 12534 ft: 15365 corp: 21/452b lim: 45 exec/s: 35 rss: 76Mb L: 24/39 MS: 1 ChangeBit- 00:06:40.809 [2024-11-26 18:56:57.921064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00871218 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.809 [2024-11-26 18:56:57.921092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.809 [2024-11-26 18:56:57.921156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.809 [2024-11-26 18:56:57.921175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.809 [2024-11-26 18:56:57.921240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.809 [2024-11-26 18:56:57.921261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.809 [2024-11-26 18:56:57.921325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.809 [2024-11-26 18:56:57.921341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.809 #36 NEW cov: 12534 ft: 15370 corp: 22/491b lim: 45 exec/s: 36 rss: 76Mb L: 39/39 MS: 1 ShuffleBytes- 00:06:40.809 [2024-11-26 18:56:57.960820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00e41218 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.809 [2024-11-26 18:56:57.960847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.809 [2024-11-26 18:56:57.960913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:85878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.809 [2024-11-26 18:56:57.960932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.809 #37 NEW cov: 12534 ft: 15381 corp: 23/515b lim: 45 exec/s: 37 rss: 76Mb L: 24/39 MS: 1 ChangeBit- 00:06:40.809 [2024-11-26 18:56:58.001279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00871218 cdw11:8f870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.809 [2024-11-26 18:56:58.001304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.809 [2024-11-26 18:56:58.001371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.809 [2024-11-26 18:56:58.001392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.809 [2024-11-26 18:56:58.001458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.809 [2024-11-26 18:56:58.001485] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.809 [2024-11-26 18:56:58.001570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.809 [2024-11-26 18:56:58.001588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.068 #38 NEW cov: 12534 ft: 15401 corp: 24/554b lim: 45 exec/s: 38 rss: 76Mb L: 39/39 MS: 1 ChangeBinInt- 00:06:41.068 [2024-11-26 18:56:58.061117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87871287 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.068 [2024-11-26 18:56:58.061144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.068 [2024-11-26 18:56:58.061209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.068 [2024-11-26 18:56:58.061229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.068 #39 NEW cov: 12534 ft: 15428 corp: 25/577b lim: 45 exec/s: 39 rss: 76Mb L: 23/39 MS: 1 ShuffleBytes- 00:06:41.068 [2024-11-26 18:56:58.101196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00e41218 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.068 [2024-11-26 18:56:58.101222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.068 [2024-11-26 18:56:58.101288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.068 [2024-11-26 18:56:58.101309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.068 #40 NEW cov: 12534 ft: 15430 corp: 26/601b lim: 45 exec/s: 40 rss: 76Mb L: 24/39 MS: 1 ChangeByte- 00:06:41.068 [2024-11-26 18:56:58.141172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.068 [2024-11-26 18:56:58.141198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.068 #41 NEW cov: 12534 ft: 15440 corp: 27/615b lim: 45 exec/s: 41 rss: 76Mb L: 14/39 MS: 1 ChangeByte- 00:06:41.068 [2024-11-26 18:56:58.181425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87871287 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.068 [2024-11-26 18:56:58.181451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.068 [2024-11-26 18:56:58.181524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.068 [2024-11-26 18:56:58.181544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.068 #47 NEW cov: 12534 
ft: 15467 corp: 28/634b lim: 45 exec/s: 47 rss: 76Mb L: 19/39 MS: 1 EraseBytes- 00:06:41.068 [2024-11-26 18:56:58.221901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00871218 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.068 [2024-11-26 18:56:58.221928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.068 [2024-11-26 18:56:58.221995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.068 [2024-11-26 18:56:58.222013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.068 [2024-11-26 18:56:58.222079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:75878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.068 [2024-11-26 18:56:58.222097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.068 [2024-11-26 18:56:58.222165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.068 [2024-11-26 18:56:58.222182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.068 #48 NEW cov: 12534 ft: 15495 corp: 29/674b lim: 45 exec/s: 48 rss: 76Mb L: 40/40 MS: 1 InsertByte- 00:06:41.326 [2024-11-26 18:56:58.281781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:12871887 cdw11:00870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.326 [2024-11-26 18:56:58.281808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.326 [2024-11-26 18:56:58.281878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.326 [2024-11-26 18:56:58.281899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.326 #49 NEW cov: 12534 ft: 15511 corp: 30/698b lim: 45 exec/s: 49 rss: 77Mb L: 24/40 MS: 1 ShuffleBytes- 00:06:41.326 [2024-11-26 18:56:58.321853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87008787 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.326 [2024-11-26 18:56:58.321884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.326 [2024-11-26 18:56:58.321952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.326 [2024-11-26 18:56:58.321972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.326 #50 NEW cov: 12534 ft: 15554 corp: 31/716b lim: 45 exec/s: 50 rss: 77Mb L: 18/40 MS: 1 CMP- DE: "\000\000\000w"- 00:06:41.326 [2024-11-26 18:56:58.361932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87871287 
cdw11:a7870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.326 [2024-11-26 18:56:58.361958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.326 [2024-11-26 18:56:58.362026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00008701 cdw11:00870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.326 [2024-11-26 18:56:58.362044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.326 #51 NEW cov: 12534 ft: 15568 corp: 32/740b lim: 45 exec/s: 51 rss: 77Mb L: 24/40 MS: 1 InsertByte- 00:06:41.326 [2024-11-26 18:56:58.422525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87008787 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.326 [2024-11-26 18:56:58.422552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.326 [2024-11-26 18:56:58.422619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76768776 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.326 [2024-11-26 18:56:58.422638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.326 [2024-11-26 18:56:58.422705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.326 [2024-11-26 18:56:58.422727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.326 [2024-11-26 18:56:58.422793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.326 [2024-11-26 18:56:58.422809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.326 #52 NEW cov: 12534 ft: 15583 corp: 33/779b lim: 45 exec/s: 52 rss: 77Mb L: 39/40 MS: 1 InsertRepeatedBytes- 00:06:41.326 [2024-11-26 18:56:58.482329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87871287 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.326 [2024-11-26 18:56:58.482355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.326 [2024-11-26 18:56:58.482421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.326 [2024-11-26 18:56:58.482440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.326 #53 NEW cov: 12534 ft: 15587 corp: 34/798b lim: 45 exec/s: 53 rss: 77Mb L: 19/40 MS: 1 ShuffleBytes- 00:06:41.584 [2024-11-26 18:56:58.542614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:87871287 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.584 [2024-11-26 18:56:58.542641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.584 [2024-11-26 
18:56:58.542709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:87878787 cdw11:87870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.584 [2024-11-26 18:56:58.542733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.584 [2024-11-26 18:56:58.542798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00008701 cdw11:00870004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.584 [2024-11-26 18:56:58.542819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.584 #54 NEW cov: 12534 ft: 15810 corp: 35/825b lim: 45 exec/s: 27 rss: 77Mb L: 27/40 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:41.584 #54 DONE cov: 12534 ft: 15810 corp: 35/825b lim: 45 exec/s: 27 rss: 77Mb 00:06:41.584 ###### Recommended dictionary. ###### 00:06:41.584 "\000I\204\305\357\214n\\" # Uses: 0 00:06:41.584 "\001\000\000\000" # Uses: 1 00:06:41.584 "\000\000\000w" # Uses: 0 00:06:41.584 ###### End of recommended dictionary. ###### 00:06:41.584 Done 54 runs in 2 second(s) 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:41.584 18:56:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:41.584 [2024-11-26 18:56:58.718977] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:41.584 [2024-11-26 18:56:58.719053] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2738545 ] 00:06:41.843 [2024-11-26 18:56:58.906550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.843 [2024-11-26 18:56:58.945208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.843 [2024-11-26 18:56:59.004234] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.843 [2024-11-26 18:56:59.020384] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:41.843 INFO: Running with entropic power schedule (0xFF, 100). 00:06:41.843 INFO: Seed: 2618252801 00:06:41.843 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:06:41.843 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:06:41.843 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:41.843 INFO: A corpus is not provided, starting from an empty corpus 00:06:41.843 #2 INITED exec/s: 0 rss: 67Mb 00:06:41.843 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:41.843 This may also happen if the target rejected all inputs we tried so far 00:06:42.101 [2024-11-26 18:56:59.075921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:42.101 [2024-11-26 18:56:59.075952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.359 NEW_FUNC[1/715]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:42.359 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:42.359 #3 NEW cov: 12204 ft: 12222 corp: 2/3b lim: 10 exec/s: 0 rss: 74Mb L: 2/2 MS: 1 CrossOver- 00:06:42.359 [2024-11-26 18:56:59.416865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000fd7a cdw11:00000000 00:06:42.359 [2024-11-26 18:56:59.416909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.359 #6 NEW cov: 12337 ft: 12941 corp: 3/5b lim: 10 exec/s: 0 rss: 75Mb L: 2/2 MS: 3 ChangeByte-ChangeByte-InsertByte- 00:06:42.359 [2024-11-26 18:56:59.456846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:42.359 [2024-11-26 18:56:59.456874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.359 #7 NEW cov: 12343 ft: 13103 corp: 4/7b lim: 10 exec/s: 0 rss: 75Mb L: 2/2 MS: 1 ShuffleBytes- 00:06:42.359 
[2024-11-26 18:56:59.517014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a1d cdw11:00000000 00:06:42.359 [2024-11-26 18:56:59.517041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.359 #8 NEW cov: 12428 ft: 13351 corp: 5/9b lim: 10 exec/s: 0 rss: 75Mb L: 2/2 MS: 1 InsertByte- 00:06:42.359 [2024-11-26 18:56:59.557108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:42.359 [2024-11-26 18:56:59.557135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.618 #9 NEW cov: 12428 ft: 13429 corp: 6/12b lim: 10 exec/s: 0 rss: 75Mb L: 3/3 MS: 1 CopyPart- 00:06:42.618 [2024-11-26 18:56:59.597246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a85 cdw11:00000000 00:06:42.618 [2024-11-26 18:56:59.597272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.618 #10 NEW cov: 12428 ft: 13521 corp: 7/14b lim: 10 exec/s: 0 rss: 75Mb L: 2/3 MS: 1 ChangeByte- 00:06:42.618 [2024-11-26 18:56:59.657396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a32 cdw11:00000000 00:06:42.618 [2024-11-26 18:56:59.657422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.618 #11 NEW cov: 12428 ft: 13627 corp: 8/16b lim: 10 exec/s: 0 rss: 75Mb L: 2/3 MS: 1 ChangeByte- 00:06:42.618 [2024-11-26 18:56:59.717731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:42.618 [2024-11-26 18:56:59.717756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.618 [2024-11-26 18:56:59.717840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:42.618 [2024-11-26 18:56:59.717860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.618 #12 NEW cov: 12428 ft: 13894 corp: 9/20b lim: 10 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:42.618 [2024-11-26 18:56:59.757715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a40 cdw11:00000000 00:06:42.618 [2024-11-26 18:56:59.757741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.618 #13 NEW cov: 12428 ft: 13918 corp: 10/22b lim: 10 exec/s: 0 rss: 75Mb L: 2/4 MS: 1 ChangeByte- 00:06:42.618 [2024-11-26 18:56:59.798062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a53 cdw11:00000000 00:06:42.618 [2024-11-26 18:56:59.798088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.618 [2024-11-26 18:56:59.798153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00005353 cdw11:00000000 00:06:42.618 [2024-11-26 18:56:59.798172] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.618 [2024-11-26 18:56:59.798236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00005385 cdw11:00000000 00:06:42.618 [2024-11-26 18:56:59.798254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.877 #14 NEW cov: 12428 ft: 14190 corp: 11/28b lim: 10 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:06:42.877 [2024-11-26 18:56:59.858032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 00:06:42.877 [2024-11-26 18:56:59.858057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.877 #15 NEW cov: 12428 ft: 14276 corp: 12/31b lim: 10 exec/s: 0 rss: 75Mb L: 3/6 MS: 1 ChangeBinInt- 00:06:42.877 [2024-11-26 18:56:59.918282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:42.877 [2024-11-26 18:56:59.918307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.877 [2024-11-26 18:56:59.918371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:42.877 [2024-11-26 18:56:59.918388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.877 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:42.877 #16 NEW cov: 12451 ft: 14306 corp: 13/35b lim: 10 exec/s: 0 rss: 75Mb L: 4/6 MS: 1 ShuffleBytes- 00:06:42.877 [2024-11-26 18:56:59.978548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2f cdw11:00000000 00:06:42.877 [2024-11-26 18:56:59.978575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.877 [2024-11-26 18:56:59.978638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002f2f cdw11:00000000 00:06:42.877 [2024-11-26 18:56:59.978658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.877 [2024-11-26 18:56:59.978722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002f2f cdw11:00000000 00:06:42.877 [2024-11-26 18:56:59.978741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.877 #17 NEW cov: 12451 ft: 14315 corp: 14/41b lim: 10 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:06:42.877 [2024-11-26 18:57:00.019024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a53 cdw11:00000000 00:06:42.877 [2024-11-26 18:57:00.019052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.877 [2024-11-26 18:57:00.019117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00005300 cdw11:00000000 00:06:42.877 [2024-11-26 18:57:00.019136] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.877 [2024-11-26 18:57:00.019202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.877 [2024-11-26 18:57:00.019221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.877 [2024-11-26 18:57:00.019285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000053 cdw11:00000000 00:06:42.877 [2024-11-26 18:57:00.019303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.877 [2024-11-26 18:57:00.019368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00005385 cdw11:00000000 00:06:42.877 [2024-11-26 18:57:00.019385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:42.877 #18 NEW cov: 12451 ft: 14572 corp: 15/51b lim: 10 exec/s: 18 rss: 75Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:42.877 [2024-11-26 18:57:00.078851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 00:06:42.877 [2024-11-26 18:57:00.078886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.877 [2024-11-26 18:57:00.078949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a03 cdw11:00000000 00:06:42.877 [2024-11-26 18:57:00.078968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.137 #19 NEW cov: 12451 ft: 14590 corp: 16/56b lim: 10 exec/s: 19 rss: 75Mb L: 5/10 MS: 1 CopyPart- 00:06:43.137 [2024-11-26 18:57:00.139077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a1d cdw11:00000000 00:06:43.137 [2024-11-26 18:57:00.139106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.137 [2024-11-26 18:57:00.139170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000bebe cdw11:00000000 00:06:43.137 [2024-11-26 18:57:00.139189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.137 [2024-11-26 18:57:00.139253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000bebe cdw11:00000000 00:06:43.137 [2024-11-26 18:57:00.139272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.137 #20 NEW cov: 12451 ft: 14638 corp: 17/63b lim: 10 exec/s: 20 rss: 75Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:06:43.137 [2024-11-26 18:57:00.179228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.137 [2024-11-26 18:57:00.179255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.137 [2024-11-26 18:57:00.179320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.137 [2024-11-26 18:57:00.179340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.137 [2024-11-26 18:57:00.179407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.137 [2024-11-26 18:57:00.179428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.137 [2024-11-26 18:57:00.179496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.137 [2024-11-26 18:57:00.179516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.137 #22 NEW cov: 12451 ft: 14670 corp: 18/72b lim: 10 exec/s: 22 rss: 75Mb L: 9/10 MS: 2 ChangeBit-CMP- DE: "\377\377\377\377\377\377\377\377"- 00:06:43.137 [2024-11-26 18:57:00.219395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:43.137 [2024-11-26 18:57:00.219422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.137 [2024-11-26 18:57:00.219488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.137 [2024-11-26 18:57:00.219508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.137 [2024-11-26 18:57:00.219571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.137 [2024-11-26 18:57:00.219591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.137 [2024-11-26 18:57:00.219653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.137 [2024-11-26 18:57:00.219669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.137 #23 NEW cov: 12451 ft: 14686 corp: 19/81b lim: 10 exec/s: 23 rss: 75Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:06:43.137 [2024-11-26 18:57:00.259553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:43.137 [2024-11-26 18:57:00.259581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.137 [2024-11-26 18:57:00.259648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.137 [2024-11-26 18:57:00.259667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.138 [2024-11-26 18:57:00.259732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002f2f cdw11:00000000 00:06:43.138 [2024-11-26 18:57:00.259752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.138 [2024-11-26 18:57:00.259818] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002f2f cdw11:00000000 00:06:43.138 [2024-11-26 18:57:00.259838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.138 #24 NEW cov: 12451 ft: 14756 corp: 20/90b lim: 10 exec/s: 24 rss: 76Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:06:43.138 [2024-11-26 18:57:00.319785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3a cdw11:00000000 00:06:43.138 [2024-11-26 18:57:00.319813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.138 [2024-11-26 18:57:00.319877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003a3a cdw11:00000000 00:06:43.138 [2024-11-26 18:57:00.319897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.138 [2024-11-26 18:57:00.319960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003a3a cdw11:00000000 00:06:43.138 [2024-11-26 18:57:00.319985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.138 [2024-11-26 18:57:00.320050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00003a0a cdw11:00000000 00:06:43.138 [2024-11-26 18:57:00.320067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.138 #25 NEW cov: 12451 ft: 14781 corp: 21/98b lim: 10 exec/s: 25 rss: 76Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:06:43.397 [2024-11-26 18:57:00.359658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2f cdw11:00000000 00:06:43.397 [2024-11-26 18:57:00.359685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.397 [2024-11-26 18:57:00.359760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002f2f cdw11:00000000 00:06:43.397 [2024-11-26 18:57:00.359778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.397 [2024-11-26 18:57:00.359841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002fff cdw11:00000000 00:06:43.397 [2024-11-26 18:57:00.359861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.397 #26 NEW cov: 12451 ft: 14877 corp: 22/104b lim: 10 exec/s: 26 rss: 76Mb L: 6/10 MS: 1 CrossOver- 00:06:43.397 [2024-11-26 18:57:00.399523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:43.397 [2024-11-26 18:57:00.399552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.397 #27 NEW cov: 12451 ft: 14890 corp: 23/106b lim: 10 exec/s: 27 rss: 76Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:43.397 [2024-11-26 18:57:00.440012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 
cdw10:0000ffff cdw11:00000000 00:06:43.397 [2024-11-26 18:57:00.440040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.397 [2024-11-26 18:57:00.440102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.397 [2024-11-26 18:57:00.440122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.397 [2024-11-26 18:57:00.440187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.397 [2024-11-26 18:57:00.440207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.397 [2024-11-26 18:57:00.440271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.397 [2024-11-26 18:57:00.440287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.397 #28 NEW cov: 12451 ft: 14906 corp: 24/115b lim: 10 exec/s: 28 rss: 76Mb L: 9/10 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:06:43.397 [2024-11-26 18:57:00.480245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a53 cdw11:00000000 00:06:43.397 [2024-11-26 18:57:00.480272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.397 [2024-11-26 18:57:00.480337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00005300 cdw11:00000000 00:06:43.397 [2024-11-26 18:57:00.480357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.397 [2024-11-26 18:57:00.480426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.397 [2024-11-26 18:57:00.480446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.397 [2024-11-26 18:57:00.480518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000453 cdw11:00000000 00:06:43.398 [2024-11-26 18:57:00.480539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.398 [2024-11-26 18:57:00.480603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00005385 cdw11:00000000 00:06:43.398 [2024-11-26 18:57:00.480619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:43.398 #29 NEW cov: 12451 ft: 14926 corp: 25/125b lim: 10 exec/s: 29 rss: 76Mb L: 10/10 MS: 1 ChangeBit- 00:06:43.398 [2024-11-26 18:57:00.540252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2f cdw11:00000000 00:06:43.398 [2024-11-26 18:57:00.540279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.398 [2024-11-26 18:57:00.540343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002f2f cdw11:00000000 00:06:43.398 [2024-11-26 18:57:00.540362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.398 [2024-11-26 18:57:00.540428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002f2f cdw11:00000000 00:06:43.398 [2024-11-26 18:57:00.540447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.398 #30 NEW cov: 12451 ft: 15013 corp: 26/131b lim: 10 exec/s: 30 rss: 76Mb L: 6/10 MS: 1 CopyPart- 00:06:43.398 [2024-11-26 18:57:00.580550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:43.398 [2024-11-26 18:57:00.580578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.398 [2024-11-26 18:57:00.580642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.398 [2024-11-26 18:57:00.580662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.398 [2024-11-26 18:57:00.580727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:43.398 [2024-11-26 18:57:00.580744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.398 [2024-11-26 18:57:00.580810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.398 [2024-11-26 18:57:00.580827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.398 [2024-11-26 18:57:00.580891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.398 [2024-11-26 18:57:00.580907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:43.657 #31 NEW cov: 12451 ft: 15049 corp: 27/141b lim: 10 exec/s: 31 rss: 76Mb L: 10/10 MS: 1 CopyPart- 00:06:43.657 [2024-11-26 18:57:00.640722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.640748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.657 [2024-11-26 18:57:00.640814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.640836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.657 [2024-11-26 18:57:00.640902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a40 cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.640921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.657 [2024-11-26 18:57:00.640987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.641003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.657 [2024-11-26 18:57:00.641065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.641081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:43.657 #32 NEW cov: 12451 ft: 15094 corp: 28/151b lim: 10 exec/s: 32 rss: 76Mb L: 10/10 MS: 1 ChangeBit- 00:06:43.657 [2024-11-26 18:57:00.700586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.700613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.657 [2024-11-26 18:57:00.700678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff0a cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.700697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.657 [2024-11-26 18:57:00.700762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.700779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.657 #33 NEW cov: 12451 ft: 15114 corp: 29/158b lim: 10 exec/s: 33 rss: 76Mb L: 7/10 MS: 1 CopyPart- 00:06:43.657 [2024-11-26 18:57:00.740448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2f cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.740479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.657 #34 NEW cov: 12451 ft: 15134 corp: 30/161b lim: 10 exec/s: 34 rss: 76Mb L: 3/10 MS: 1 CrossOver- 00:06:43.657 [2024-11-26 18:57:00.801140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.801166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.657 [2024-11-26 18:57:00.801231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.801251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.657 [2024-11-26 18:57:00.801316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.801336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.657 [2024-11-26 18:57:00.801417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.801435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 
p:0 m:0 dnr:0 00:06:43.657 [2024-11-26 18:57:00.801508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000001d cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.801527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:43.657 #35 NEW cov: 12451 ft: 15172 corp: 31/171b lim: 10 exec/s: 35 rss: 76Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:43.657 [2024-11-26 18:57:00.840951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2f cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.840977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.657 [2024-11-26 18:57:00.841043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002e2f cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.841062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.657 [2024-11-26 18:57:00.841127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002f2f cdw11:00000000 00:06:43.657 [2024-11-26 18:57:00.841144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.917 #36 NEW cov: 12451 ft: 15180 corp: 32/177b lim: 10 exec/s: 36 rss: 76Mb L: 6/10 MS: 1 ChangeBit- 00:06:43.917 [2024-11-26 18:57:00.901352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:43.917 [2024-11-26 18:57:00.901379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.917 [2024-11-26 18:57:00.901445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fa00 cdw11:00000000 00:06:43.917 [2024-11-26 18:57:00.901464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.917 [2024-11-26 18:57:00.901536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002f2f cdw11:00000000 00:06:43.917 [2024-11-26 18:57:00.901557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.917 [2024-11-26 18:57:00.901621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002f2f cdw11:00000000 00:06:43.917 [2024-11-26 18:57:00.901638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.917 #37 NEW cov: 12451 ft: 15212 corp: 33/186b lim: 10 exec/s: 37 rss: 77Mb L: 9/10 MS: 1 ChangeBinInt- 00:06:43.917 [2024-11-26 18:57:00.961103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005b1d cdw11:00000000 00:06:43.917 [2024-11-26 18:57:00.961129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.917 #38 NEW cov: 12451 ft: 15213 corp: 34/188b lim: 10 exec/s: 38 rss: 77Mb L: 2/10 MS: 1 ChangeByte- 00:06:43.917 [2024-11-26 18:57:01.001169] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:43.917 [2024-11-26 18:57:01.001195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.917 #39 NEW cov: 12451 ft: 15237 corp: 35/191b lim: 10 exec/s: 39 rss: 77Mb L: 3/10 MS: 1 CopyPart- 00:06:43.917 [2024-11-26 18:57:01.061617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a53 cdw11:00000000 00:06:43.917 [2024-11-26 18:57:01.061643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.917 [2024-11-26 18:57:01.061708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00005353 cdw11:00000000 00:06:43.917 [2024-11-26 18:57:01.061728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.917 [2024-11-26 18:57:01.061794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00005185 cdw11:00000000 00:06:43.917 [2024-11-26 18:57:01.061817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.917 #40 NEW cov: 12451 ft: 15256 corp: 36/197b lim: 10 exec/s: 20 rss: 77Mb L: 6/10 MS: 1 ChangeBinInt- 00:06:43.917 #40 DONE cov: 12451 ft: 15256 corp: 36/197b lim: 10 exec/s: 20 rss: 77Mb 00:06:43.917 ###### Recommended dictionary. ###### 00:06:43.917 "\377\377\377\377\377\377\377\377" # Uses: 1 00:06:43.917 ###### End of recommended dictionary. ###### 00:06:43.917 Done 40 runs in 2 second(s) 00:06:44.176 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:44.176 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:44.176 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:44.176 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:44.176 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:44.176 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:44.176 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:44.176 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:44.177 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:44.177 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:44.177 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:44.177 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:44.177 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:44.177 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:44.177 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:44.177 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed 
-e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:44.177 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:44.177 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:44.177 18:57:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:44.177 [2024-11-26 18:57:01.236763] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:44.177 [2024-11-26 18:57:01.236832] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2738979 ] 00:06:44.436 [2024-11-26 18:57:01.429422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.436 [2024-11-26 18:57:01.468383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.436 [2024-11-26 18:57:01.527437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.436 [2024-11-26 18:57:01.543588] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:44.436 INFO: Running with entropic power schedule (0xFF, 100). 00:06:44.436 INFO: Seed: 845793697 00:06:44.436 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:06:44.436 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:06:44.436 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:44.436 INFO: A corpus is not provided, starting from an empty corpus 00:06:44.436 #2 INITED exec/s: 0 rss: 67Mb 00:06:44.436 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:44.436 This may also happen if the target rejected all inputs we tried so far 00:06:44.436 [2024-11-26 18:57:01.611555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:44.436 [2024-11-26 18:57:01.611594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.436 [2024-11-26 18:57:01.611678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:44.436 [2024-11-26 18:57:01.611694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.436 [2024-11-26 18:57:01.611776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e80a cdw11:00000000 00:06:44.436 [2024-11-26 18:57:01.611793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.953 NEW_FUNC[1/715]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:44.953 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:44.953 #3 NEW cov: 12224 ft: 12225 corp: 2/7b lim: 10 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:06:44.953 [2024-11-26 18:57:01.972847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:44.953 [2024-11-26 18:57:01.972894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.953 [2024-11-26 18:57:01.972979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:44.953 [2024-11-26 18:57:01.972995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.953 [2024-11-26 18:57:01.973077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000ae8 cdw11:00000000 00:06:44.953 [2024-11-26 18:57:01.973092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.953 #4 NEW cov: 12337 ft: 12774 corp: 3/14b lim: 10 exec/s: 0 rss: 74Mb L: 7/7 MS: 1 CrossOver- 00:06:44.953 [2024-11-26 18:57:02.042957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:44.953 [2024-11-26 18:57:02.042984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.953 [2024-11-26 18:57:02.043071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:44.953 [2024-11-26 18:57:02.043087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.953 [2024-11-26 18:57:02.043172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000ae8 cdw11:00000000 00:06:44.953 [2024-11-26 18:57:02.043188] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.953 #5 NEW cov: 12343 ft: 13056 corp: 4/21b lim: 10 exec/s: 0 rss: 75Mb L: 7/7 MS: 1 ChangeByte- 00:06:44.953 [2024-11-26 18:57:02.113253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:44.954 [2024-11-26 18:57:02.113278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.954 [2024-11-26 18:57:02.113366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:44.954 [2024-11-26 18:57:02.113381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.954 [2024-11-26 18:57:02.113466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000af8 cdw11:00000000 00:06:44.954 [2024-11-26 18:57:02.113488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.954 #6 NEW cov: 12428 ft: 13346 corp: 5/28b lim: 10 exec/s: 0 rss: 75Mb L: 7/7 MS: 1 ChangeBit- 00:06:45.211 [2024-11-26 18:57:02.184017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e878 cdw11:00000000 00:06:45.211 [2024-11-26 18:57:02.184044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.211 [2024-11-26 18:57:02.184130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.211 [2024-11-26 18:57:02.184145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.211 [2024-11-26 18:57:02.184228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e80a cdw11:00000000 00:06:45.211 [2024-11-26 18:57:02.184244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.211 [2024-11-26 18:57:02.184335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000e80a cdw11:00000000 00:06:45.211 [2024-11-26 18:57:02.184351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.211 #7 NEW cov: 12428 ft: 13693 corp: 6/36b lim: 10 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 InsertByte- 00:06:45.211 [2024-11-26 18:57:02.234092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e878 cdw11:00000000 00:06:45.212 [2024-11-26 18:57:02.234117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.212 [2024-11-26 18:57:02.234203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.212 [2024-11-26 18:57:02.234220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.212 [2024-11-26 18:57:02.234310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 
nsid:0 cdw10:00003d0a cdw11:00000000 00:06:45.212 [2024-11-26 18:57:02.234326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.212 [2024-11-26 18:57:02.234411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000e80a cdw11:00000000 00:06:45.212 [2024-11-26 18:57:02.234429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.212 #8 NEW cov: 12428 ft: 13713 corp: 7/44b lim: 10 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 ChangeByte- 00:06:45.212 [2024-11-26 18:57:02.304631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e878 cdw11:00000000 00:06:45.212 [2024-11-26 18:57:02.304657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.212 [2024-11-26 18:57:02.304747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8dd cdw11:00000000 00:06:45.212 [2024-11-26 18:57:02.304763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.212 [2024-11-26 18:57:02.304852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e83d cdw11:00000000 00:06:45.212 [2024-11-26 18:57:02.304870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.212 [2024-11-26 18:57:02.304959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000ae8 cdw11:00000000 00:06:45.212 [2024-11-26 18:57:02.304977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.212 #9 NEW cov: 12428 ft: 13774 corp: 8/53b lim: 10 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 InsertByte- 00:06:45.212 [2024-11-26 18:57:02.374261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e878 cdw11:00000000 00:06:45.212 [2024-11-26 18:57:02.374289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.212 [2024-11-26 18:57:02.374375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.212 [2024-11-26 18:57:02.374391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.212 #10 NEW cov: 12428 ft: 14006 corp: 9/58b lim: 10 exec/s: 0 rss: 75Mb L: 5/9 MS: 1 EraseBytes- 00:06:45.471 [2024-11-26 18:57:02.425237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e878 cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.425264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.472 [2024-11-26 18:57:02.425356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.425375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.472 [2024-11-26 
18:57:02.425465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003d0a cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.425488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.472 [2024-11-26 18:57:02.425573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000e821 cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.425591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.472 #11 NEW cov: 12428 ft: 14053 corp: 10/66b lim: 10 exec/s: 0 rss: 75Mb L: 8/9 MS: 1 CrossOver- 00:06:45.472 [2024-11-26 18:57:02.475619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.475644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.472 [2024-11-26 18:57:02.475698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e878 cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.475713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.472 [2024-11-26 18:57:02.475779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.475794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.472 [2024-11-26 18:57:02.475878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00003d0a cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.475894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.472 [2024-11-26 18:57:02.475976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000e821 cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.475992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.472 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:45.472 #12 NEW cov: 12451 ft: 14229 corp: 11/76b lim: 10 exec/s: 0 rss: 75Mb L: 10/10 MS: 1 CrossOver- 00:06:45.472 [2024-11-26 18:57:02.545673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e878 cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.545698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.472 [2024-11-26 18:57:02.545751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000068e8 cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.545767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.472 [2024-11-26 18:57:02.545844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003d0a cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.545860] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.472 [2024-11-26 18:57:02.545935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000e821 cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.545951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.472 #13 NEW cov: 12451 ft: 14279 corp: 12/84b lim: 10 exec/s: 0 rss: 75Mb L: 8/10 MS: 1 ChangeBit- 00:06:45.472 [2024-11-26 18:57:02.595906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e878 cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.595933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.472 [2024-11-26 18:57:02.595988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8dd cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.596004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.472 [2024-11-26 18:57:02.596077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e83d cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.596094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.472 [2024-11-26 18:57:02.596164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.596180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.472 #14 NEW cov: 12451 ft: 14295 corp: 13/92b lim: 10 exec/s: 14 rss: 75Mb L: 8/10 MS: 1 CrossOver- 00:06:45.472 [2024-11-26 18:57:02.646243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.646268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.472 [2024-11-26 18:57:02.646321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.646335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.472 [2024-11-26 18:57:02.646411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000ae8 cdw11:00000000 00:06:45.472 [2024-11-26 18:57:02.646427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.472 #15 NEW cov: 12451 ft: 14328 corp: 14/99b lim: 10 exec/s: 15 rss: 75Mb L: 7/10 MS: 1 ChangeBit- 00:06:45.731 [2024-11-26 18:57:02.696635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.731 [2024-11-26 18:57:02.696662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.731 [2024-11-26 18:57:02.696719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) 
qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.731 [2024-11-26 18:57:02.696740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.731 [2024-11-26 18:57:02.696820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000aa4 cdw11:00000000 00:06:45.731 [2024-11-26 18:57:02.696837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.731 #16 NEW cov: 12451 ft: 14367 corp: 15/106b lim: 10 exec/s: 16 rss: 75Mb L: 7/10 MS: 1 ChangeByte- 00:06:45.731 [2024-11-26 18:57:02.766916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.731 [2024-11-26 18:57:02.766945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.731 [2024-11-26 18:57:02.767000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e80a cdw11:00000000 00:06:45.731 [2024-11-26 18:57:02.767017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.731 [2024-11-26 18:57:02.767093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a401 cdw11:00000000 00:06:45.731 [2024-11-26 18:57:02.767111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.731 #17 NEW cov: 12451 ft: 14389 corp: 16/112b lim: 10 exec/s: 17 rss: 75Mb L: 6/10 MS: 1 EraseBytes- 00:06:45.731 [2024-11-26 18:57:02.837369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.731 [2024-11-26 18:57:02.837397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.731 [2024-11-26 18:57:02.837446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.731 [2024-11-26 18:57:02.837461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.731 [2024-11-26 18:57:02.837540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000ae8 cdw11:00000000 00:06:45.731 [2024-11-26 18:57:02.837557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.731 #18 NEW cov: 12451 ft: 14404 corp: 17/119b lim: 10 exec/s: 18 rss: 75Mb L: 7/10 MS: 1 ShuffleBytes- 00:06:45.731 [2024-11-26 18:57:02.888045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.731 [2024-11-26 18:57:02.888072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.731 [2024-11-26 18:57:02.888128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e878 cdw11:00000000 00:06:45.731 [2024-11-26 18:57:02.888144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:45.731 [2024-11-26 18:57:02.888216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000ae8 cdw11:00000000 00:06:45.731 [2024-11-26 18:57:02.888234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.731 [2024-11-26 18:57:02.888313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00003ddd cdw11:00000000 00:06:45.731 [2024-11-26 18:57:02.888330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.731 #19 NEW cov: 12451 ft: 14465 corp: 18/128b lim: 10 exec/s: 19 rss: 75Mb L: 9/10 MS: 1 ShuffleBytes- 00:06:45.990 [2024-11-26 18:57:02.958231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.990 [2024-11-26 18:57:02.958260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.990 [2024-11-26 18:57:02.958311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.990 [2024-11-26 18:57:02.958327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.990 [2024-11-26 18:57:02.958402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e80a cdw11:00000000 00:06:45.990 [2024-11-26 18:57:02.958421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.990 #20 NEW cov: 12451 ft: 14488 corp: 19/135b lim: 10 exec/s: 20 rss: 75Mb L: 7/10 MS: 1 CopyPart- 00:06:45.990 [2024-11-26 18:57:03.008288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e878 cdw11:00000000 00:06:45.990 [2024-11-26 18:57:03.008314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.990 [2024-11-26 18:57:03.008365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.990 [2024-11-26 18:57:03.008380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.990 [2024-11-26 18:57:03.008449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00007e0a cdw11:00000000 00:06:45.990 [2024-11-26 18:57:03.008464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.990 #21 NEW cov: 12451 ft: 14542 corp: 20/141b lim: 10 exec/s: 21 rss: 75Mb L: 6/10 MS: 1 InsertByte- 00:06:45.990 [2024-11-26 18:57:03.078950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e878 cdw11:00000000 00:06:45.990 [2024-11-26 18:57:03.078976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.990 [2024-11-26 18:57:03.079047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8f1 cdw11:00000000 00:06:45.990 [2024-11-26 18:57:03.079064] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.990 [2024-11-26 18:57:03.079143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e80a cdw11:00000000 00:06:45.990 [2024-11-26 18:57:03.079159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.990 [2024-11-26 18:57:03.079240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000e80a cdw11:00000000 00:06:45.990 [2024-11-26 18:57:03.079256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.990 #22 NEW cov: 12451 ft: 14570 corp: 21/149b lim: 10 exec/s: 22 rss: 75Mb L: 8/10 MS: 1 ChangeBinInt- 00:06:45.990 [2024-11-26 18:57:03.129004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.990 [2024-11-26 18:57:03.129032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.990 [2024-11-26 18:57:03.129092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.990 [2024-11-26 18:57:03.129111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.990 [2024-11-26 18:57:03.129187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e866 cdw11:00000000 00:06:45.990 [2024-11-26 18:57:03.129208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.990 #23 NEW cov: 12451 ft: 14592 corp: 22/156b lim: 10 exec/s: 23 rss: 75Mb L: 7/10 MS: 1 InsertByte- 00:06:45.990 [2024-11-26 18:57:03.179287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.990 [2024-11-26 18:57:03.179312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.990 [2024-11-26 18:57:03.179363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:45.990 [2024-11-26 18:57:03.179378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.990 [2024-11-26 18:57:03.179447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000aea cdw11:00000000 00:06:45.990 [2024-11-26 18:57:03.179463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.990 #24 NEW cov: 12451 ft: 14624 corp: 23/163b lim: 10 exec/s: 24 rss: 75Mb L: 7/10 MS: 1 ChangeBit- 00:06:46.249 [2024-11-26 18:57:03.229871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e800 cdw11:00000000 00:06:46.249 [2024-11-26 18:57:03.229898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.249 [2024-11-26 18:57:03.229945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ 
(00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:46.249 [2024-11-26 18:57:03.229961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.249 [2024-11-26 18:57:03.230032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000078 cdw11:00000000 00:06:46.249 [2024-11-26 18:57:03.230048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.249 [2024-11-26 18:57:03.230132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:46.249 [2024-11-26 18:57:03.230147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.249 #25 NEW cov: 12451 ft: 14698 corp: 24/172b lim: 10 exec/s: 25 rss: 75Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:06:46.249 [2024-11-26 18:57:03.279418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:46.249 [2024-11-26 18:57:03.279443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.249 [2024-11-26 18:57:03.279493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000aa4 cdw11:00000000 00:06:46.249 [2024-11-26 18:57:03.279508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.249 #26 NEW cov: 12451 ft: 14709 corp: 25/177b lim: 10 exec/s: 26 rss: 75Mb L: 5/10 MS: 1 EraseBytes- 00:06:46.249 [2024-11-26 18:57:03.330662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:46.249 [2024-11-26 18:57:03.330689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.249 [2024-11-26 18:57:03.330742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e878 cdw11:00000000 00:06:46.249 [2024-11-26 18:57:03.330757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.249 [2024-11-26 18:57:03.330835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000ae8 cdw11:00000000 00:06:46.249 [2024-11-26 18:57:03.330855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.249 [2024-11-26 18:57:03.330935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00003d3d cdw11:00000000 00:06:46.249 [2024-11-26 18:57:03.330952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.249 [2024-11-26 18:57:03.331030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000dd0a cdw11:00000000 00:06:46.249 [2024-11-26 18:57:03.331049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.249 #27 NEW cov: 12451 ft: 14719 corp: 26/187b lim: 10 exec/s: 27 rss: 75Mb L: 10/10 MS: 1 
CrossOver- 00:06:46.249 [2024-11-26 18:57:03.399959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e878 cdw11:00000000 00:06:46.249 [2024-11-26 18:57:03.399988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.249 [2024-11-26 18:57:03.400046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e80a cdw11:00000000 00:06:46.249 [2024-11-26 18:57:03.400061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.249 #28 NEW cov: 12451 ft: 14737 corp: 27/191b lim: 10 exec/s: 28 rss: 75Mb L: 4/10 MS: 1 EraseBytes- 00:06:46.507 [2024-11-26 18:57:03.470877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:46.507 [2024-11-26 18:57:03.470904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.507 [2024-11-26 18:57:03.470957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:46.507 [2024-11-26 18:57:03.470972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.507 [2024-11-26 18:57:03.471038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000ae8 cdw11:00000000 00:06:46.507 [2024-11-26 18:57:03.471055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.507 [2024-11-26 18:57:03.471136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002121 cdw11:00000000 00:06:46.507 [2024-11-26 18:57:03.471154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.507 #29 NEW cov: 12451 ft: 14746 corp: 28/199b lim: 10 exec/s: 29 rss: 76Mb L: 8/10 MS: 1 InsertByte- 00:06:46.507 [2024-11-26 18:57:03.520980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:46.507 [2024-11-26 18:57:03.521007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.507 [2024-11-26 18:57:03.521062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f8e8 cdw11:00000000 00:06:46.507 [2024-11-26 18:57:03.521078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.507 [2024-11-26 18:57:03.521154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e80a cdw11:00000000 00:06:46.507 [2024-11-26 18:57:03.521171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.507 [2024-11-26 18:57:03.591744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e821 cdw11:00000000 00:06:46.507 [2024-11-26 18:57:03.591770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0
00:06:46.507 [2024-11-26 18:57:03.591835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e8f8 cdw11:00000000
00:06:46.507 [2024-11-26 18:57:03.591851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:46.507 [2024-11-26 18:57:03.591920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e8e8 cdw11:00000000
00:06:46.507 [2024-11-26 18:57:03.591936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:46.507 [2024-11-26 18:57:03.592018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a21 cdw11:00000000
00:06:46.507 [2024-11-26 18:57:03.592033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:46.507 #31 NEW cov: 12451 ft: 14767 corp: 29/207b lim: 10 exec/s: 15 rss: 76Mb L: 8/10 MS: 2 ShuffleBytes-InsertByte-
00:06:46.507 #31 DONE cov: 12451 ft: 14767 corp: 29/207b lim: 10 exec/s: 15 rss: 76Mb
00:06:46.507 Done 31 runs in 2 second(s)
00:06:46.764 18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408'
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
18:57:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:46.764 [2024-11-26 18:57:03.768944] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:46.764 [2024-11-26 18:57:03.769010] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2739380 ] 00:06:46.764 [2024-11-26 18:57:03.962659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.022 [2024-11-26 18:57:04.003494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.022 [2024-11-26 18:57:04.062625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.022 [2024-11-26 18:57:04.078772] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:47.022 INFO: Running with entropic power schedule (0xFF, 100). 00:06:47.022 INFO: Seed: 3380281102 00:06:47.022 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:06:47.022 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:06:47.022 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:47.022 INFO: A corpus is not provided, starting from an empty corpus 00:06:47.022 [2024-11-26 18:57:04.127418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.022 [2024-11-26 18:57:04.127469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.022 #2 INITED cov: 12226 ft: 12246 corp: 1/1b exec/s: 0 rss: 73Mb 00:06:47.022 [2024-11-26 18:57:04.177509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.022 [2024-11-26 18:57:04.177543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.535 NEW_FUNC[1/1]: 0x1c400a8 in event_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:595 00:06:47.535 #3 NEW cov: 12364 ft: 13011 corp: 2/2b lim: 5 exec/s: 0 rss: 75Mb L: 1/1 MS: 1 ChangeBinInt- 00:06:47.535 [2024-11-26 18:57:04.560858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.535 [2024-11-26 18:57:04.560927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.535 #4 NEW cov: 12370 ft: 13323 corp: 3/3b lim: 5 exec/s: 0 rss: 75Mb L: 1/1 MS: 1 ShuffleBytes- 00:06:47.535 [2024-11-26 18:57:04.620931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.535 [2024-11-26 18:57:04.620965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:06:47.535 #5 NEW cov: 12455 ft: 13608 corp: 4/4b lim: 5 exec/s: 0 rss: 75Mb L: 1/1 MS: 1 ChangeBit- 00:06:47.535 [2024-11-26 18:57:04.692534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.535 [2024-11-26 18:57:04.692564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.535 [2024-11-26 18:57:04.692666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.535 [2024-11-26 18:57:04.692684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.535 [2024-11-26 18:57:04.692773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.535 [2024-11-26 18:57:04.692790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.535 [2024-11-26 18:57:04.692883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.535 [2024-11-26 18:57:04.692899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.535 [2024-11-26 18:57:04.692996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.535 [2024-11-26 18:57:04.693016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.535 #6 NEW cov: 12455 ft: 14483 corp: 5/9b lim: 5 exec/s: 0 rss: 75Mb L: 5/5 MS: 1 CMP- DE: "\001\000\000\001"- 00:06:47.792 [2024-11-26 18:57:04.752885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.793 [2024-11-26 18:57:04.752913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.793 [2024-11-26 18:57:04.753005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.793 [2024-11-26 18:57:04.753023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.793 [2024-11-26 18:57:04.753126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.793 [2024-11-26 18:57:04.753142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.793 [2024-11-26 18:57:04.753238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.793 [2024-11-26 18:57:04.753256] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.793 [2024-11-26 18:57:04.753349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.793 [2024-11-26 18:57:04.753366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.793 #7 NEW cov: 12455 ft: 14533 corp: 6/14b lim: 5 exec/s: 0 rss: 75Mb L: 5/5 MS: 1 PersAutoDict- DE: "\001\000\000\001"- 00:06:47.793 [2024-11-26 18:57:04.821786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.793 [2024-11-26 18:57:04.821815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.793 #8 NEW cov: 12455 ft: 14615 corp: 7/15b lim: 5 exec/s: 0 rss: 75Mb L: 1/5 MS: 1 ChangeBit- 00:06:47.793 [2024-11-26 18:57:04.871839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.793 [2024-11-26 18:57:04.871868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.793 #9 NEW cov: 12455 ft: 14653 corp: 8/16b lim: 5 exec/s: 0 rss: 75Mb L: 1/5 MS: 1 CrossOver- 00:06:47.793 [2024-11-26 18:57:04.942401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.793 [2024-11-26 18:57:04.942432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.793 [2024-11-26 18:57:04.942523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.793 [2024-11-26 18:57:04.942541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.793 #10 NEW cov: 12455 ft: 14899 corp: 9/18b lim: 5 exec/s: 0 rss: 75Mb L: 2/5 MS: 1 CopyPart- 00:06:47.793 [2024-11-26 18:57:04.993252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.793 [2024-11-26 18:57:04.993288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.793 [2024-11-26 18:57:04.993379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.793 [2024-11-26 18:57:04.993397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.793 [2024-11-26 18:57:04.993501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.793 [2024-11-26 18:57:04.993519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:06:47.793 [2024-11-26 18:57:04.993614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.793 [2024-11-26 18:57:04.993642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.050 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:48.050 #11 NEW cov: 12472 ft: 14959 corp: 10/22b lim: 5 exec/s: 0 rss: 75Mb L: 4/5 MS: 1 EraseBytes- 00:06:48.050 [2024-11-26 18:57:05.072442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.050 [2024-11-26 18:57:05.072480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.050 #12 NEW cov: 12472 ft: 14966 corp: 11/23b lim: 5 exec/s: 12 rss: 75Mb L: 1/5 MS: 1 ChangeBit- 00:06:48.050 [2024-11-26 18:57:05.143504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.050 [2024-11-26 18:57:05.143536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.050 [2024-11-26 18:57:05.143627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.050 [2024-11-26 18:57:05.143645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.050 [2024-11-26 18:57:05.143735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.050 [2024-11-26 18:57:05.143752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.050 #13 NEW cov: 12472 ft: 15141 corp: 12/26b lim: 5 exec/s: 13 rss: 75Mb L: 3/5 MS: 1 InsertByte- 00:06:48.050 [2024-11-26 18:57:05.214141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.050 [2024-11-26 18:57:05.214169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.050 [2024-11-26 18:57:05.214257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.050 [2024-11-26 18:57:05.214274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.050 [2024-11-26 18:57:05.214367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.050 [2024-11-26 18:57:05.214385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.050 [2024-11-26 18:57:05.214490] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.050 [2024-11-26 18:57:05.214507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.050 #14 NEW cov: 12472 ft: 15172 corp: 13/30b lim: 5 exec/s: 14 rss: 75Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:48.308 [2024-11-26 18:57:05.273975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.274005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.308 [2024-11-26 18:57:05.274100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.274118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.308 [2024-11-26 18:57:05.274215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.274233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.308 #15 NEW cov: 12472 ft: 15181 corp: 14/33b lim: 5 exec/s: 15 rss: 75Mb L: 3/5 MS: 1 ShuffleBytes- 00:06:48.308 [2024-11-26 18:57:05.355188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.355217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.308 [2024-11-26 18:57:05.355311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.355327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.308 [2024-11-26 18:57:05.355425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.355442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.308 [2024-11-26 18:57:05.355553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.355571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.308 [2024-11-26 18:57:05.355664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.355681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:06:48.308 #16 NEW cov: 12472 ft: 15229 corp: 15/38b lim: 5 exec/s: 16 rss: 75Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:48.308 [2024-11-26 18:57:05.414876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.414903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.308 [2024-11-26 18:57:05.414991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.415007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.308 [2024-11-26 18:57:05.415102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.415119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.308 [2024-11-26 18:57:05.415205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.415222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.308 #17 NEW cov: 12472 ft: 15249 corp: 16/42b lim: 5 exec/s: 17 rss: 75Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:48.308 [2024-11-26 18:57:05.495624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.495653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.308 [2024-11-26 18:57:05.495749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.495768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.308 [2024-11-26 18:57:05.495861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.495880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.308 [2024-11-26 18:57:05.495970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.495986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.308 [2024-11-26 18:57:05.496074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.308 [2024-11-26 18:57:05.496091] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:48.564 #18 NEW cov: 12472 ft: 15351 corp: 17/47b lim: 5 exec/s: 18 rss: 75Mb L: 5/5 MS: 1 ChangeBit- 00:06:48.564 [2024-11-26 18:57:05.544711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.564 [2024-11-26 18:57:05.544739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.564 [2024-11-26 18:57:05.544834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.564 [2024-11-26 18:57:05.544852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.565 #19 NEW cov: 12472 ft: 15366 corp: 18/49b lim: 5 exec/s: 19 rss: 75Mb L: 2/5 MS: 1 CrossOver- 00:06:48.565 [2024-11-26 18:57:05.594439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.565 [2024-11-26 18:57:05.594467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.565 #20 NEW cov: 12472 ft: 15381 corp: 19/50b lim: 5 exec/s: 20 rss: 75Mb L: 1/5 MS: 1 ChangeBit- 00:06:48.565 [2024-11-26 18:57:05.645112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.565 [2024-11-26 18:57:05.645144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.565 [2024-11-26 18:57:05.645234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.565 [2024-11-26 18:57:05.645252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.565 #21 NEW cov: 12472 ft: 15393 corp: 20/52b lim: 5 exec/s: 21 rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:06:48.565 [2024-11-26 18:57:05.715449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.565 [2024-11-26 18:57:05.715480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.565 [2024-11-26 18:57:05.715572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.565 [2024-11-26 18:57:05.715589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.565 #22 NEW cov: 12472 ft: 15409 corp: 21/54b lim: 5 exec/s: 22 rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:06:48.565 [2024-11-26 18:57:05.765978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.565 [2024-11-26 18:57:05.766007] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.565 [2024-11-26 18:57:05.766102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.565 [2024-11-26 18:57:05.766122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.565 [2024-11-26 18:57:05.766209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.565 [2024-11-26 18:57:05.766226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.822 #23 NEW cov: 12472 ft: 15420 corp: 22/57b lim: 5 exec/s: 23 rss: 76Mb L: 3/5 MS: 1 EraseBytes- 00:06:48.822 [2024-11-26 18:57:05.837014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.822 [2024-11-26 18:57:05.837040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.822 [2024-11-26 18:57:05.837136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.822 [2024-11-26 18:57:05.837153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.822 [2024-11-26 18:57:05.837245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.822 [2024-11-26 18:57:05.837260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.822 [2024-11-26 18:57:05.837350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.822 [2024-11-26 18:57:05.837367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.822 [2024-11-26 18:57:05.837454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.822 [2024-11-26 18:57:05.837476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:48.822 #24 NEW cov: 12472 ft: 15442 corp: 23/62b lim: 5 exec/s: 24 rss: 76Mb L: 5/5 MS: 1 ChangeByte- 00:06:48.822 [2024-11-26 18:57:05.907311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.822 [2024-11-26 18:57:05.907337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.822 [2024-11-26 18:57:05.907423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:48.822 [2024-11-26 18:57:05.907441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.822 [2024-11-26 18:57:05.907534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.822 [2024-11-26 18:57:05.907553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.823 [2024-11-26 18:57:05.907648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.823 [2024-11-26 18:57:05.907665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.823 [2024-11-26 18:57:05.907754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.823 [2024-11-26 18:57:05.907772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:48.823 #25 NEW cov: 12472 ft: 15479 corp: 24/67b lim: 5 exec/s: 25 rss: 76Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:48.823 [2024-11-26 18:57:05.976406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.823 [2024-11-26 18:57:05.976433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.823 [2024-11-26 18:57:05.976522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.823 [2024-11-26 18:57:05.976539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.823 #26 NEW cov: 12479 ft: 15504 corp: 25/69b lim: 5 exec/s: 26 rss: 76Mb L: 2/5 MS: 1 ChangeBit- 00:06:49.082 [2024-11-26 18:57:06.046273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.082 [2024-11-26 18:57:06.046302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.082 #27 NEW cov: 12479 ft: 15507 corp: 26/70b lim: 5 exec/s: 27 rss: 76Mb L: 1/5 MS: 1 ChangeByte- 00:06:49.082 [2024-11-26 18:57:06.096770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.082 [2024-11-26 18:57:06.096796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.082 [2024-11-26 18:57:06.096885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.082 [2024-11-26 18:57:06.096904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.082 #28 
NEW cov: 12479 ft: 15522 corp: 27/72b lim: 5 exec/s: 14 rss: 76Mb L: 2/5 MS: 1 EraseBytes-
00:06:49.082 #28 DONE cov: 12479 ft: 15522 corp: 27/72b lim: 5 exec/s: 14 rss: 76Mb
00:06:49.082 ###### Recommended dictionary. ######
00:06:49.082 "\001\000\000\001" # Uses: 1
00:06:49.082 ###### End of recommended dictionary. ######
00:06:49.082 Done 28 runs in 2 second(s)
00:06:49.082 18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409'
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
18:57:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9
[2024-11-26 18:57:06.298790] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
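The "Recommended dictionary" block in the run summary above is standard libFuzzer output: input fragments that earned new coverage during the run (the same token "\001\000\000\001" also appears in the CMP- and PersAutoDict- mutation annotations earlier in this run). As a minimal sketch, such an entry can be saved in the AFL/libFuzzer dictionary format, which uses \xNN hex escapes rather than the octal escapes printed above, and handed to a libFuzzer-driven binary with the documented -dict= option. Whether the llvm_nvme_fuzz wrapper forwards -dict= to libFuzzer is an assumption, not something this log shows:

    # Hypothetical dictionary file built from the recommended entry above;
    # printf '%s' writes the escapes literally instead of expanding them.
    printf '%s\n' 'kw1="\x01\x00\x00\x01"' > /tmp/llvm_nvmf_8.dict
    # A plain libFuzzer target would then be re-run with:
    #   -dict=/tmp/llvm_nvmf_8.dict
    # (pass-through via llvm_nvme_fuzz is assumed, not confirmed here).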
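For reference, the start_llvm_fuzz trace above (repeated once per fuzzer number by the (( i++ )) / (( i < fuzz_num )) loop) reduces to a short shell sequence. The paths, flags, and leak suppressions are taken verbatim from the trace; the variable names, the port arithmetic, and the sed redirection are a sketch of what common.sh/run.sh appear to do, not a copy of those scripts:

    #!/usr/bin/env bash
    set -e
    i=9                                  # fuzzer number; the trace shows printf %02d 9 -> port=4409
    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    port=$((4400 + i))                   # assumed derivation matching 4408/4409 above
    corpus_dir=$spdk/../corpus/llvm_nvmf_$i
    nvmf_cfg=/tmp/fuzz_json_$i.conf
    suppress_file=/var/tmp/suppress_nvmf_fuzz

    mkdir -p "$corpus_dir"

    # Rewrite the listen port in the template config (run.sh@38 in the trace);
    # writing the result to $nvmf_cfg is inferred from the -c argument below.
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$spdk/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

    # Suppress the two known shutdown-path leaks (run.sh@41-42).
    printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "$suppress_file"
    export LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0

    # Invocation as traced; flag meanings inferred from the traced values
    # (-F target TRID, -c fuzzer config, -t seconds, -D corpus dir, -Z fuzzer number).
    "$spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$spdk/../output/llvm/" \
        -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
        -c "$nvmf_cfg" -t 1 -D "$corpus_dir" -Z "$i"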
00:06:49.342 [2024-11-26 18:57:06.298858] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2740117 ] 00:06:49.342 [2024-11-26 18:57:06.488847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.342 [2024-11-26 18:57:06.527547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.600 [2024-11-26 18:57:06.586874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.600 [2024-11-26 18:57:06.603015] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:49.600 INFO: Running with entropic power schedule (0xFF, 100). 00:06:49.600 INFO: Seed: 1610310093 00:06:49.600 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:06:49.600 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:06:49.600 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:49.601 INFO: A corpus is not provided, starting from an empty corpus 00:06:49.601 [2024-11-26 18:57:06.658539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.601 [2024-11-26 18:57:06.658573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.601 #2 INITED cov: 12248 ft: 12241 corp: 1/1b exec/s: 0 rss: 73Mb 00:06:49.601 [2024-11-26 18:57:06.698538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.601 [2024-11-26 18:57:06.698565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.860 NEW_FUNC[1/1]: 0x195fb18 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1190 00:06:49.860 #3 NEW cov: 12364 ft: 12882 corp: 2/2b lim: 5 exec/s: 0 rss: 74Mb L: 1/1 MS: 1 ChangeByte- 00:06:49.860 [2024-11-26 18:57:07.020197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.860 [2024-11-26 18:57:07.020246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.860 [2024-11-26 18:57:07.020326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.860 [2024-11-26 18:57:07.020352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.860 [2024-11-26 18:57:07.020430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.860 [2024-11-26 18:57:07.020458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.860 [2024-11-26 18:57:07.020559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.860 [2024-11-26 18:57:07.020586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.860 [2024-11-26 18:57:07.020667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.860 [2024-11-26 18:57:07.020693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.860 #4 NEW cov: 12370 ft: 13967 corp: 3/7b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:49.860 [2024-11-26 18:57:07.069846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.860 [2024-11-26 18:57:07.069876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.860 [2024-11-26 18:57:07.069943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.860 [2024-11-26 18:57:07.069964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.860 [2024-11-26 18:57:07.070032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.860 [2024-11-26 18:57:07.070054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.119 #5 NEW cov: 12455 ft: 14423 corp: 4/10b lim: 5 exec/s: 0 rss: 74Mb L: 3/5 MS: 1 CMP- DE: "\015\000"- 00:06:50.119 [2024-11-26 18:57:07.109619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.119 [2024-11-26 18:57:07.109652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.119 #6 NEW cov: 12455 ft: 14475 corp: 5/11b lim: 5 exec/s: 0 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:50.119 [2024-11-26 18:57:07.170407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.119 [2024-11-26 18:57:07.170435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.119 [2024-11-26 18:57:07.170487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.119 [2024-11-26 18:57:07.170504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.119 [2024-11-26 18:57:07.170570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.119 [2024-11-26 18:57:07.170589] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.119 [2024-11-26 18:57:07.170655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.119 [2024-11-26 18:57:07.170676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.119 [2024-11-26 18:57:07.170744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.119 [2024-11-26 18:57:07.170760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.119 #7 NEW cov: 12455 ft: 14558 corp: 6/16b lim: 5 exec/s: 0 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:06:50.119 [2024-11-26 18:57:07.230401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.119 [2024-11-26 18:57:07.230427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.119 [2024-11-26 18:57:07.230501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.119 [2024-11-26 18:57:07.230521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.119 [2024-11-26 18:57:07.230587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.119 [2024-11-26 18:57:07.230605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.119 [2024-11-26 18:57:07.230672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.119 [2024-11-26 18:57:07.230688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.119 #8 NEW cov: 12455 ft: 14690 corp: 7/20b lim: 5 exec/s: 0 rss: 75Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:50.119 [2024-11-26 18:57:07.270657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.119 [2024-11-26 18:57:07.270683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.119 [2024-11-26 18:57:07.270748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.119 [2024-11-26 18:57:07.270773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.120 [2024-11-26 18:57:07.270840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:50.120 [2024-11-26 18:57:07.270858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.120 [2024-11-26 18:57:07.270938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.120 [2024-11-26 18:57:07.270955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.120 [2024-11-26 18:57:07.271021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.120 [2024-11-26 18:57:07.271037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.120 #9 NEW cov: 12455 ft: 14743 corp: 8/25b lim: 5 exec/s: 0 rss: 75Mb L: 5/5 MS: 1 CrossOver- 00:06:50.120 [2024-11-26 18:57:07.330833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.120 [2024-11-26 18:57:07.330861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.380 [2024-11-26 18:57:07.330930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.330949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.380 [2024-11-26 18:57:07.331016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.331038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.380 [2024-11-26 18:57:07.331112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.331132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.380 [2024-11-26 18:57:07.331199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.331218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.380 #10 NEW cov: 12455 ft: 14768 corp: 9/30b lim: 5 exec/s: 0 rss: 75Mb L: 5/5 MS: 1 ChangeBit- 00:06:50.380 [2024-11-26 18:57:07.391021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.391048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.380 [2024-11-26 18:57:07.391116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 
cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.391135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.380 [2024-11-26 18:57:07.391201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.391223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.380 [2024-11-26 18:57:07.391288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.391304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.380 [2024-11-26 18:57:07.391369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.391385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.380 #11 NEW cov: 12455 ft: 14808 corp: 10/35b lim: 5 exec/s: 0 rss: 75Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:50.380 [2024-11-26 18:57:07.430960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.430986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.380 [2024-11-26 18:57:07.431053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.431072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.380 [2024-11-26 18:57:07.431137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.431155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.380 [2024-11-26 18:57:07.431221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.431237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.380 #12 NEW cov: 12455 ft: 14881 corp: 11/39b lim: 5 exec/s: 0 rss: 75Mb L: 4/5 MS: 1 CrossOver- 00:06:50.380 [2024-11-26 18:57:07.471060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.471086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.380 [2024-11-26 18:57:07.471154] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.471173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.380 [2024-11-26 18:57:07.471240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.471258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.380 [2024-11-26 18:57:07.471323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.471341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.380 #13 NEW cov: 12455 ft: 14927 corp: 12/43b lim: 5 exec/s: 0 rss: 75Mb L: 4/5 MS: 1 EraseBytes- 00:06:50.380 [2024-11-26 18:57:07.511005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.380 [2024-11-26 18:57:07.511036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.380 [2024-11-26 18:57:07.511103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.381 [2024-11-26 18:57:07.511122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.381 [2024-11-26 18:57:07.511188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.381 [2024-11-26 18:57:07.511206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.381 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:50.381 #14 NEW cov: 12478 ft: 14970 corp: 13/46b lim: 5 exec/s: 0 rss: 75Mb L: 3/5 MS: 1 ChangeBit- 00:06:50.381 [2024-11-26 18:57:07.571549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.381 [2024-11-26 18:57:07.571577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.381 [2024-11-26 18:57:07.571653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.381 [2024-11-26 18:57:07.571673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.381 [2024-11-26 18:57:07.571738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.381 [2024-11-26 
18:57:07.571758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.381 [2024-11-26 18:57:07.571824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.381 [2024-11-26 18:57:07.571840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.381 [2024-11-26 18:57:07.571906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.381 [2024-11-26 18:57:07.571923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.640 #15 NEW cov: 12478 ft: 15038 corp: 14/51b lim: 5 exec/s: 0 rss: 75Mb L: 5/5 MS: 1 CrossOver- 00:06:50.640 [2024-11-26 18:57:07.631533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.640 [2024-11-26 18:57:07.631559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.640 [2024-11-26 18:57:07.631627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.640 [2024-11-26 18:57:07.631646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.640 [2024-11-26 18:57:07.631714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.640 [2024-11-26 18:57:07.631733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.640 [2024-11-26 18:57:07.631803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.640 [2024-11-26 18:57:07.631819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.640 #16 NEW cov: 12478 ft: 15043 corp: 15/55b lim: 5 exec/s: 16 rss: 75Mb L: 4/5 MS: 1 ChangeBinInt- 00:06:50.640 [2024-11-26 18:57:07.671311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.640 [2024-11-26 18:57:07.671337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.640 [2024-11-26 18:57:07.671406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.640 [2024-11-26 18:57:07.671424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.640 #17 NEW cov: 12478 ft: 15222 corp: 16/57b lim: 5 exec/s: 17 rss: 75Mb L: 2/5 MS: 1 EraseBytes- 00:06:50.640 [2024-11-26 18:57:07.731526] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.640 [2024-11-26 18:57:07.731553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.640 [2024-11-26 18:57:07.731622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.640 [2024-11-26 18:57:07.731640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.640 #18 NEW cov: 12478 ft: 15310 corp: 17/59b lim: 5 exec/s: 18 rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:06:50.640 [2024-11-26 18:57:07.791505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.640 [2024-11-26 18:57:07.791531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.640 #19 NEW cov: 12478 ft: 15399 corp: 18/60b lim: 5 exec/s: 19 rss: 75Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:50.640 [2024-11-26 18:57:07.832263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.640 [2024-11-26 18:57:07.832290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.640 [2024-11-26 18:57:07.832372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.640 [2024-11-26 18:57:07.832393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.640 [2024-11-26 18:57:07.832460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.640 [2024-11-26 18:57:07.832486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.640 [2024-11-26 18:57:07.832555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.640 [2024-11-26 18:57:07.832573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.640 [2024-11-26 18:57:07.832641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.640 [2024-11-26 18:57:07.832661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.899 #20 NEW cov: 12478 ft: 15438 corp: 19/65b lim: 5 exec/s: 20 rss: 75Mb L: 5/5 MS: 1 InsertByte- 00:06:50.899 [2024-11-26 18:57:07.892288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.899 [2024-11-26 18:57:07.892315] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.899 [2024-11-26 18:57:07.892382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.899 [2024-11-26 18:57:07.892401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.899 [2024-11-26 18:57:07.892466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.899 [2024-11-26 18:57:07.892491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.899 [2024-11-26 18:57:07.892557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.899 [2024-11-26 18:57:07.892574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.899 #21 NEW cov: 12478 ft: 15447 corp: 20/69b lim: 5 exec/s: 21 rss: 75Mb L: 4/5 MS: 1 InsertByte- 00:06:50.899 [2024-11-26 18:57:07.932541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.899 [2024-11-26 18:57:07.932567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.899 [2024-11-26 18:57:07.932632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.899 [2024-11-26 18:57:07.932651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.899 [2024-11-26 18:57:07.932719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.899 [2024-11-26 18:57:07.932740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.900 [2024-11-26 18:57:07.932805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.900 [2024-11-26 18:57:07.932821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.900 [2024-11-26 18:57:07.932885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.900 [2024-11-26 18:57:07.932901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.900 #22 NEW cov: 12478 ft: 15464 corp: 21/74b lim: 5 exec/s: 22 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:06:50.900 [2024-11-26 18:57:07.992725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:50.900 [2024-11-26 18:57:07.992751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.900 [2024-11-26 18:57:07.992817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.900 [2024-11-26 18:57:07.992840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.900 [2024-11-26 18:57:07.992908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.900 [2024-11-26 18:57:07.992924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.900 [2024-11-26 18:57:07.992991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.900 [2024-11-26 18:57:07.993009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.900 [2024-11-26 18:57:07.993074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.900 [2024-11-26 18:57:07.993091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.900 #23 NEW cov: 12478 ft: 15537 corp: 22/79b lim: 5 exec/s: 23 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:06:50.900 [2024-11-26 18:57:08.052416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.900 [2024-11-26 18:57:08.052442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.900 [2024-11-26 18:57:08.052515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.900 [2024-11-26 18:57:08.052534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.900 #24 NEW cov: 12478 ft: 15544 corp: 23/81b lim: 5 exec/s: 24 rss: 75Mb L: 2/5 MS: 1 EraseBytes- 00:06:50.900 [2024-11-26 18:57:08.092507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.900 [2024-11-26 18:57:08.092534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.900 [2024-11-26 18:57:08.092601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.900 [2024-11-26 18:57:08.092622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.228 #25 NEW cov: 12478 ft: 15619 corp: 24/83b lim: 5 exec/s: 25 rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:06:51.228 
[2024-11-26 18:57:08.152875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.152904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.228 [2024-11-26 18:57:08.152972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.152992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.228 [2024-11-26 18:57:08.153059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.153076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.228 #26 NEW cov: 12478 ft: 15625 corp: 25/86b lim: 5 exec/s: 26 rss: 76Mb L: 3/5 MS: 1 CrossOver- 00:06:51.228 [2024-11-26 18:57:08.213043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.213070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.228 [2024-11-26 18:57:08.213139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.213158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.228 [2024-11-26 18:57:08.213226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.213243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.228 [2024-11-26 18:57:08.253117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.253143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.228 [2024-11-26 18:57:08.253212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.253231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.228 [2024-11-26 18:57:08.253298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.253317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.228 #28 NEW cov: 12478 ft: 15628 corp: 26/89b 
lim: 5 exec/s: 28 rss: 76Mb L: 3/5 MS: 2 CrossOver-ChangeByte- 00:06:51.228 [2024-11-26 18:57:08.293228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.293254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.228 [2024-11-26 18:57:08.293322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.293340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.228 [2024-11-26 18:57:08.293408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.293425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.228 #29 NEW cov: 12478 ft: 15632 corp: 27/92b lim: 5 exec/s: 29 rss: 76Mb L: 3/5 MS: 1 EraseBytes- 00:06:51.228 [2024-11-26 18:57:08.333442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.333469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.228 [2024-11-26 18:57:08.333544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.333562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.228 [2024-11-26 18:57:08.333636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.333655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.228 [2024-11-26 18:57:08.333724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.333743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.228 #30 NEW cov: 12478 ft: 15644 corp: 28/96b lim: 5 exec/s: 30 rss: 76Mb L: 4/5 MS: 1 PersAutoDict- DE: "\015\000"- 00:06:51.228 [2024-11-26 18:57:08.373432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.373459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.228 [2024-11-26 18:57:08.373536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 
[2024-11-26 18:57:08.373557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.228 [2024-11-26 18:57:08.373628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.228 [2024-11-26 18:57:08.373651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.588 #31 NEW cov: 12478 ft: 15661 corp: 29/99b lim: 5 exec/s: 31 rss: 76Mb L: 3/5 MS: 1 EraseBytes- 00:06:51.588 [2024-11-26 18:57:08.433596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.588 [2024-11-26 18:57:08.433624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.588 [2024-11-26 18:57:08.433692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.588 [2024-11-26 18:57:08.433712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.588 [2024-11-26 18:57:08.433780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.588 [2024-11-26 18:57:08.433800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.588 #32 NEW cov: 12478 ft: 15666 corp: 30/102b lim: 5 exec/s: 32 rss: 76Mb L: 3/5 MS: 1 ShuffleBytes- 00:06:51.588 [2024-11-26 18:57:08.494075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.588 [2024-11-26 18:57:08.494101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.588 [2024-11-26 18:57:08.494170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.588 [2024-11-26 18:57:08.494189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.589 [2024-11-26 18:57:08.494257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.589 [2024-11-26 18:57:08.494281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.589 [2024-11-26 18:57:08.494346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.589 [2024-11-26 18:57:08.494362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.589 [2024-11-26 18:57:08.494429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.589 [2024-11-26 18:57:08.494445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.589 #33 NEW cov: 12478 ft: 15669 corp: 31/107b lim: 5 exec/s: 33 rss: 76Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:51.589 [2024-11-26 18:57:08.554234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.589 [2024-11-26 18:57:08.554261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.589 [2024-11-26 18:57:08.554327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.589 [2024-11-26 18:57:08.554346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.589 [2024-11-26 18:57:08.554412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.589 [2024-11-26 18:57:08.554430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.589 [2024-11-26 18:57:08.554500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.589 [2024-11-26 18:57:08.554518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.589 [2024-11-26 18:57:08.554588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.589 [2024-11-26 18:57:08.554606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.589 #34 NEW cov: 12478 ft: 15675 corp: 32/112b lim: 5 exec/s: 34 rss: 76Mb L: 5/5 MS: 1 CrossOver- 00:06:51.589 [2024-11-26 18:57:08.614090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.589 [2024-11-26 18:57:08.614116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.589 [2024-11-26 18:57:08.614183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.589 [2024-11-26 18:57:08.614201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.589 [2024-11-26 18:57:08.614267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.589 [2024-11-26 18:57:08.614285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.589 #35 NEW cov: 12478 ft: 15688 corp: 33/115b lim: 5 exec/s: 17 rss: 
76Mb L: 3/5 MS: 1 ChangeBit- 00:06:51.589 #35 DONE cov: 12478 ft: 15688 corp: 33/115b lim: 5 exec/s: 17 rss: 76Mb 00:06:51.589 ###### Recommended dictionary. ###### 00:06:51.589 "\015\000" # Uses: 1 00:06:51.589 ###### End of recommended dictionary. ###### 00:06:51.589 Done 35 runs in 2 second(s) 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:51.891 18:57:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:51.891 [2024-11-26 18:57:08.811278] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
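[Editor's note, not part of the captured log: the run.sh trace above already records every flag used to launch fuzz round 10. A minimal sketch for reproducing that invocation by hand follows; it assumes an SPDK checkout at a hypothetical $SPDK_ROOT built with the LLVM fuzzer targets, and every flag and value is taken verbatim from the trace, with only the Jenkins workspace paths swapped for variables.]

    # Sketch only: re-run fuzzer type 10 (admin SECURITY RECEIVE) outside Jenkins.
    SPDK_ROOT=/path/to/spdk            # assumption: point at your own checkout
    port=4410                          # trsvcid used by round 10, per the trace
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # Rewrite the stock fuzz config to listen on port 4410, as run.sh does with sed
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$SPDK_ROOT/test/fuzz/llvm/nvmf/fuzz_json.conf" > /tmp/fuzz_json_10.conf
    # LeakSanitizer suppressions written by run.sh before launching the target
    printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_nvmf_fuzz
    mkdir -p "$SPDK_ROOT/../corpus/llvm_nvmf_10"
    # Same flags as the logged invocation: 1 core, 512 MB hugepages, 1 s per run, -Z selects fuzzer 10
    LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 \
      "$SPDK_ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -P "$SPDK_ROOT/../output/llvm/" -F "$trid" -c /tmp/fuzz_json_10.conf \
      -t 1 -D "$SPDK_ROOT/../corpus/llvm_nvmf_10" -Z 10

[End of editor's note; the captured log resumes below.]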
00:06:51.891 [2024-11-26 18:57:08.811350] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2740486 ] 00:06:51.891 [2024-11-26 18:57:08.995002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.891 [2024-11-26 18:57:09.032984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.891 [2024-11-26 18:57:09.091998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.148 [2024-11-26 18:57:09.108154] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:52.148 INFO: Running with entropic power schedule (0xFF, 100). 00:06:52.148 INFO: Seed: 4116331644 00:06:52.148 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:06:52.148 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:06:52.148 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:52.148 INFO: A corpus is not provided, starting from an empty corpus 00:06:52.148 #2 INITED exec/s: 0 rss: 67Mb 00:06:52.148 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:52.148 This may also happen if the target rejected all inputs we tried so far 00:06:52.148 [2024-11-26 18:57:09.153014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fbff4884 cdw11:cc58e4e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.148 [2024-11-26 18:57:09.153049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.406 NEW_FUNC[1/716]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:52.406 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:52.406 #7 NEW cov: 12271 ft: 12249 corp: 2/10b lim: 40 exec/s: 0 rss: 74Mb L: 9/9 MS: 5 ChangeBinInt-ChangeByte-InsertByte-EraseBytes-CMP- DE: "\377H\204\314X\344\345@"- 00:06:52.406 [2024-11-26 18:57:09.524010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fb4884cc cdw11:58ff4884 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.406 [2024-11-26 18:57:09.524054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.406 #8 NEW cov: 12387 ft: 12658 corp: 3/23b lim: 40 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:06:52.406 [2024-11-26 18:57:09.614170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ff4884cc cdw11:978024aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.406 [2024-11-26 18:57:09.614205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.665 #9 NEW cov: 12393 ft: 13060 corp: 4/32b lim: 40 exec/s: 0 rss: 74Mb L: 9/13 MS: 1 CMP- DE: "\377H\204\314\227\200$\252"- 00:06:52.665 [2024-11-26 18:57:09.674243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fbff4884 cdw11:cc58a4e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:52.665 [2024-11-26 18:57:09.674276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.665 #10 NEW cov: 12478 ft: 13411 corp: 5/41b lim: 40 exec/s: 0 rss: 74Mb L: 9/13 MS: 1 ChangeBit- 00:06:52.665 [2024-11-26 18:57:09.734399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fb4884cc cdw11:58ff4884 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.665 [2024-11-26 18:57:09.734430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.665 #11 NEW cov: 12478 ft: 13507 corp: 6/54b lim: 40 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:06:52.665 [2024-11-26 18:57:09.824678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fb4823cc cdw11:58ff4884 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.665 [2024-11-26 18:57:09.824710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.665 #12 NEW cov: 12478 ft: 13590 corp: 7/67b lim: 40 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeByte- 00:06:52.923 [2024-11-26 18:57:09.884858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fbff4884 cdw11:ff4884cc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.923 [2024-11-26 18:57:09.884890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.923 [2024-11-26 18:57:09.884925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:978024cc cdw11:58e4e5aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.923 [2024-11-26 18:57:09.884941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.923 #13 NEW cov: 12478 ft: 14017 corp: 8/85b lim: 40 exec/s: 0 rss: 75Mb L: 18/18 MS: 1 CrossOver- 00:06:52.923 [2024-11-26 18:57:09.944918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fb48233b cdw11:a700b77b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.923 [2024-11-26 18:57:09.944949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.923 #14 NEW cov: 12478 ft: 14066 corp: 9/98b lim: 40 exec/s: 0 rss: 75Mb L: 13/18 MS: 1 ChangeBinInt- 00:06:52.923 [2024-11-26 18:57:10.035284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fbff4884 cdw11:cc2758a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.923 [2024-11-26 18:57:10.035326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.923 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:52.923 #15 NEW cov: 12495 ft: 14134 corp: 10/108b lim: 40 exec/s: 0 rss: 75Mb L: 10/18 MS: 1 InsertByte- 00:06:53.181 [2024-11-26 18:57:10.135558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fbff4884 cdw11:ff48aa40 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.181 [2024-11-26 18:57:10.135597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:53.181 #16 NEW cov: 12495 ft: 14211 corp: 11/117b lim: 40 exec/s: 16 rss: 75Mb L: 9/18 MS: 1 EraseBytes- 00:06:53.181 [2024-11-26 18:57:10.225747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fb214884 cdw11:ff48aa40 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.181 [2024-11-26 18:57:10.225781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.181 #17 NEW cov: 12495 ft: 14230 corp: 12/126b lim: 40 exec/s: 17 rss: 75Mb L: 9/18 MS: 1 ChangeByte- 00:06:53.181 [2024-11-26 18:57:10.315922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:a700b77b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.181 [2024-11-26 18:57:10.315953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.181 #18 NEW cov: 12495 ft: 14252 corp: 13/139b lim: 40 exec/s: 18 rss: 75Mb L: 13/18 MS: 1 ChangeBinInt- 00:06:53.439 [2024-11-26 18:57:10.406256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fb214884 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.439 [2024-11-26 18:57:10.406288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.439 [2024-11-26 18:57:10.406323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff48 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.439 [2024-11-26 18:57:10.406340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.439 #19 NEW cov: 12495 ft: 14269 corp: 14/158b lim: 40 exec/s: 19 rss: 75Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:06:53.439 [2024-11-26 18:57:10.496455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fb4884cc cdw11:58ff48fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.439 [2024-11-26 18:57:10.496492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.439 [2024-11-26 18:57:10.496528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:4884cc58 cdw11:ff4884cc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.439 [2024-11-26 18:57:10.496544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.439 #20 NEW cov: 12495 ft: 14285 corp: 15/178b lim: 40 exec/s: 20 rss: 75Mb L: 20/20 MS: 1 CopyPart- 00:06:53.439 [2024-11-26 18:57:10.556762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fbff4800 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.439 [2024-11-26 18:57:10.556793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.439 [2024-11-26 18:57:10.556829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.439 [2024-11-26 18:57:10.556845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.439 [2024-11-26 18:57:10.556882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.439 [2024-11-26 18:57:10.556899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.439 [2024-11-26 18:57:10.556931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0084cc27 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.439 [2024-11-26 18:57:10.556947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.439 #21 NEW cov: 12495 ft: 14798 corp: 16/214b lim: 40 exec/s: 21 rss: 75Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:06:53.439 [2024-11-26 18:57:10.646838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fb4884cc cdw11:58ff4884 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.439 [2024-11-26 18:57:10.646870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.698 #22 NEW cov: 12495 ft: 14813 corp: 17/228b lim: 40 exec/s: 22 rss: 75Mb L: 14/36 MS: 1 InsertByte- 00:06:53.698 [2024-11-26 18:57:10.697036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fbff4884 cdw11:cc580000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.698 [2024-11-26 18:57:10.697068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.698 [2024-11-26 18:57:10.697102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0001a4e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.698 [2024-11-26 18:57:10.697118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.698 #23 NEW cov: 12495 ft: 14847 corp: 18/245b lim: 40 exec/s: 23 rss: 75Mb L: 17/36 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\001"- 00:06:53.698 [2024-11-26 18:57:10.757279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a5f5f5f cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.698 [2024-11-26 18:57:10.757310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.698 [2024-11-26 18:57:10.757345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.698 [2024-11-26 18:57:10.757361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.698 [2024-11-26 18:57:10.757392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.698 [2024-11-26 18:57:10.757408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.698 [2024-11-26 18:57:10.757438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.698 [2024-11-26 18:57:10.757453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.698 #24 NEW cov: 12495 ft: 14876 corp: 19/284b lim: 40 exec/s: 24 rss: 75Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:53.698 [2024-11-26 18:57:10.817273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fb4884cc cdw11:58ff48f9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.698 [2024-11-26 18:57:10.817303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.698 [2024-11-26 18:57:10.817336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:4884cc58 cdw11:ff4884cc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.698 [2024-11-26 18:57:10.817355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.698 #25 NEW cov: 12495 ft: 14903 corp: 20/304b lim: 40 exec/s: 25 rss: 75Mb L: 20/39 MS: 1 ChangeBit- 00:06:53.698 [2024-11-26 18:57:10.907516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fb4884cc cdw11:58ff4884 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.698 [2024-11-26 18:57:10.907548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.957 #26 NEW cov: 12495 ft: 14912 corp: 21/317b lim: 40 exec/s: 26 rss: 75Mb L: 13/39 MS: 1 CopyPart- 00:06:53.958 [2024-11-26 18:57:10.997840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fbff4800 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.958 [2024-11-26 18:57:10.997872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.958 [2024-11-26 18:57:10.997906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.958 [2024-11-26 18:57:10.997938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.958 [2024-11-26 18:57:10.997969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.958 [2024-11-26 18:57:10.997986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.958 #27 NEW cov: 12502 ft: 15119 corp: 22/348b lim: 40 exec/s: 27 rss: 75Mb L: 31/39 MS: 1 InsertRepeatedBytes- 00:06:53.958 [2024-11-26 18:57:11.057893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fbff4884 cdw11:cc58e4e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.958 [2024-11-26 18:57:11.057925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.958 #28 NEW cov: 12502 ft: 15142 corp: 23/358b lim: 40 exec/s: 28 rss: 75Mb L: 10/39 MS: 1 InsertByte- 00:06:53.958 [2024-11-26 18:57:11.108039] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fb4884cc cdw11:58ff48fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.958 [2024-11-26 18:57:11.108069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.958 [2024-11-26 18:57:11.108102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:4884cc58 cdw11:ff7e84cc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.958 [2024-11-26 18:57:11.108117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.958 #29 NEW cov: 12502 ft: 15239 corp: 24/378b lim: 40 exec/s: 14 rss: 75Mb L: 20/39 MS: 1 ChangeByte- 00:06:53.958 #29 DONE cov: 12502 ft: 15239 corp: 24/378b lim: 40 exec/s: 14 rss: 75Mb 00:06:53.958 ###### Recommended dictionary. ###### 00:06:53.958 "\377H\204\314X\344\345@" # Uses: 0 00:06:53.958 "\377H\204\314\227\200$\252" # Uses: 0 00:06:53.958 "\000\000\000\000\000\000\000\001" # Uses: 0 00:06:53.958 ###### End of recommended dictionary. ###### 00:06:53.958 Done 29 runs in 2 second(s) 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:54.218 18:57:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:54.218 [2024-11-26 18:57:11.301280] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:54.218 [2024-11-26 18:57:11.301346] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2740842 ] 00:06:54.476 [2024-11-26 18:57:11.490974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.476 [2024-11-26 18:57:11.529170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.476 [2024-11-26 18:57:11.588136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.476 [2024-11-26 18:57:11.604292] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:54.476 INFO: Running with entropic power schedule (0xFF, 100). 00:06:54.476 INFO: Seed: 2317351876 00:06:54.476 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:06:54.476 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:06:54.476 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:54.476 INFO: A corpus is not provided, starting from an empty corpus 00:06:54.476 #2 INITED exec/s: 0 rss: 67Mb 00:06:54.477 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:54.477 This may also happen if the target rejected all inputs we tried so far 00:06:54.477 [2024-11-26 18:57:11.660286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-11-26 18:57:11.660317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.477 [2024-11-26 18:57:11.660388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-11-26 18:57:11.660406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.477 [2024-11-26 18:57:11.660480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-11-26 18:57:11.660503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.477 [2024-11-26 18:57:11.660570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-11-26 18:57:11.660590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.002 NEW_FUNC[1/717]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:55.002 NEW_FUNC[2/717]: 
0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:55.002 #5 NEW cov: 12286 ft: 12285 corp: 2/37b lim: 40 exec/s: 0 rss: 74Mb L: 36/36 MS: 3 ChangeByte-ChangeBit-InsertRepeatedBytes- 00:06:55.002 [2024-11-26 18:57:12.000735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0fb155dd cdw11:cd844900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.002 [2024-11-26 18:57:12.000772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.002 #9 NEW cov: 12399 ft: 13816 corp: 3/46b lim: 40 exec/s: 0 rss: 74Mb L: 9/36 MS: 4 ChangeByte-CrossOver-ChangeByte-CMP- DE: "\017\261U\335\315\204I\000"- 00:06:55.002 [2024-11-26 18:57:12.050793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0fb155 cdw11:ddcd8449 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.002 [2024-11-26 18:57:12.050821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.002 #14 NEW cov: 12405 ft: 14048 corp: 4/57b lim: 40 exec/s: 0 rss: 74Mb L: 11/36 MS: 5 CopyPart-InsertByte-ShuffleBytes-InsertByte-PersAutoDict- DE: "\017\261U\335\315\204I\000"- 00:06:55.002 [2024-11-26 18:57:12.091053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0fb155 cdw11:dd6df3bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.003 [2024-11-26 18:57:12.091080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.003 [2024-11-26 18:57:12.091152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e9cd8449 cdw11:00cd8449 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.003 [2024-11-26 18:57:12.091171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.003 #15 NEW cov: 12490 ft: 14505 corp: 5/76b lim: 40 exec/s: 0 rss: 74Mb L: 19/36 MS: 1 CMP- DE: "m\363\275\351\315\204I\000"- 00:06:55.003 [2024-11-26 18:57:12.151054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0fb155 cdw11:ddcd84b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.003 [2024-11-26 18:57:12.151080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.003 #16 NEW cov: 12490 ft: 14679 corp: 6/87b lim: 40 exec/s: 0 rss: 74Mb L: 11/36 MS: 1 CopyPart- 00:06:55.003 [2024-11-26 18:57:12.191639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.003 [2024-11-26 18:57:12.191666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.003 [2024-11-26 18:57:12.191738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.003 [2024-11-26 18:57:12.191758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.003 [2024-11-26 18:57:12.191825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.003 [2024-11-26 18:57:12.191846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.003 [2024-11-26 18:57:12.191914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.003 [2024-11-26 18:57:12.191930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.265 #19 NEW cov: 12490 ft: 14811 corp: 7/122b lim: 40 exec/s: 0 rss: 74Mb L: 35/36 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes- 00:06:55.265 [2024-11-26 18:57:12.231765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.265 [2024-11-26 18:57:12.231793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.265 [2024-11-26 18:57:12.231880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.265 [2024-11-26 18:57:12.231900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.265 [2024-11-26 18:57:12.231969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0055ddcd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.265 [2024-11-26 18:57:12.231986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.265 [2024-11-26 18:57:12.232056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:84490000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.265 [2024-11-26 18:57:12.232073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.265 #20 NEW cov: 12490 ft: 14903 corp: 8/158b lim: 40 exec/s: 0 rss: 74Mb L: 36/36 MS: 1 CrossOver- 00:06:55.265 [2024-11-26 18:57:12.291937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.265 [2024-11-26 18:57:12.291964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.265 [2024-11-26 18:57:12.292052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0000007e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.265 [2024-11-26 18:57:12.292073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.265 [2024-11-26 18:57:12.292144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0055ddcd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.265 [2024-11-26 18:57:12.292165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.265 [2024-11-26 18:57:12.292237] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:84490000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.265 [2024-11-26 18:57:12.292254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.265 #21 NEW cov: 12490 ft: 14920 corp: 9/194b lim: 40 exec/s: 0 rss: 75Mb L: 36/36 MS: 1 ChangeByte- 00:06:55.265 [2024-11-26 18:57:12.351660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0fb155 cdw11:ddcd28b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.265 [2024-11-26 18:57:12.351688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.265 #22 NEW cov: 12490 ft: 15016 corp: 10/205b lim: 40 exec/s: 0 rss: 75Mb L: 11/36 MS: 1 ChangeByte- 00:06:55.265 [2024-11-26 18:57:12.412308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.265 [2024-11-26 18:57:12.412339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.265 [2024-11-26 18:57:12.412413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.265 [2024-11-26 18:57:12.412433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.265 [2024-11-26 18:57:12.412511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.265 [2024-11-26 18:57:12.412533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.265 [2024-11-26 18:57:12.412602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.265 [2024-11-26 18:57:12.412619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.265 #23 NEW cov: 12490 ft: 15141 corp: 11/242b lim: 40 exec/s: 0 rss: 75Mb L: 37/37 MS: 1 InsertByte- 00:06:55.265 [2024-11-26 18:57:12.452134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.265 [2024-11-26 18:57:12.452165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.265 [2024-11-26 18:57:12.452243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.265 [2024-11-26 18:57:12.452265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.527 #24 NEW cov: 12490 ft: 15233 corp: 12/261b lim: 40 exec/s: 0 rss: 75Mb L: 19/37 MS: 1 EraseBytes- 00:06:55.527 [2024-11-26 18:57:12.512431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0fb155 cdw11:dd6df3bd SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:55.527 [2024-11-26 18:57:12.512459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.527 [2024-11-26 18:57:12.512539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:01010101 cdw11:01010101 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-11-26 18:57:12.512560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.527 [2024-11-26 18:57:12.512643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:01e9cd84 cdw11:4900cd84 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-11-26 18:57:12.512666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.527 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:55.527 #25 NEW cov: 12513 ft: 15486 corp: 13/289b lim: 40 exec/s: 0 rss: 75Mb L: 28/37 MS: 1 InsertRepeatedBytes- 00:06:55.527 [2024-11-26 18:57:12.572727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00f6ffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-11-26 18:57:12.572754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.527 [2024-11-26 18:57:12.572812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-11-26 18:57:12.572826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.527 [2024-11-26 18:57:12.572882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.527 [2024-11-26 18:57:12.572896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.528 [2024-11-26 18:57:12.572948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.528 [2024-11-26 18:57:12.572962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.528 #26 NEW cov: 12513 ft: 15496 corp: 14/325b lim: 40 exec/s: 0 rss: 75Mb L: 36/37 MS: 1 ChangeBinInt- 00:06:55.528 [2024-11-26 18:57:12.612652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.528 [2024-11-26 18:57:12.612677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.528 [2024-11-26 18:57:12.612735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.528 [2024-11-26 18:57:12.612749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.528 [2024-11-26 18:57:12.612801] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.528 [2024-11-26 18:57:12.612814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.528 #27 NEW cov: 12513 ft: 15564 corp: 15/354b lim: 40 exec/s: 0 rss: 75Mb L: 29/37 MS: 1 EraseBytes- 00:06:55.528 [2024-11-26 18:57:12.652429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a884343 cdw11:43434343 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.528 [2024-11-26 18:57:12.652455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.528 #29 NEW cov: 12513 ft: 15586 corp: 16/365b lim: 40 exec/s: 29 rss: 75Mb L: 11/37 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:55.528 [2024-11-26 18:57:12.692540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a884343 cdw11:43434343 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.528 [2024-11-26 18:57:12.692565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.528 #30 NEW cov: 12513 ft: 15608 corp: 17/376b lim: 40 exec/s: 30 rss: 75Mb L: 11/37 MS: 1 ShuffleBytes- 00:06:55.789 [2024-11-26 18:57:12.753266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.753292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.789 [2024-11-26 18:57:12.753349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.753363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.789 [2024-11-26 18:57:12.753420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0055ddcd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.753437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.789 [2024-11-26 18:57:12.753485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:84490000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.753502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.789 #31 NEW cov: 12513 ft: 15657 corp: 18/412b lim: 40 exec/s: 31 rss: 75Mb L: 36/37 MS: 1 ShuffleBytes- 00:06:55.789 [2024-11-26 18:57:12.793373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.793398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.789 [2024-11-26 18:57:12.793455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.793469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.789 [2024-11-26 18:57:12.793527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.793540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.789 [2024-11-26 18:57:12.793594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.793608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.789 #32 NEW cov: 12513 ft: 15751 corp: 19/451b lim: 40 exec/s: 32 rss: 75Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:55.789 [2024-11-26 18:57:12.833115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0f6ddd cdw11:55f3b1bd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.833140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.789 [2024-11-26 18:57:12.833213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e9cd8449 cdw11:00cd8449 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.833228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.789 #33 NEW cov: 12513 ft: 15790 corp: 20/470b lim: 40 exec/s: 33 rss: 75Mb L: 19/39 MS: 1 ShuffleBytes- 00:06:55.789 [2024-11-26 18:57:12.873780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00f6ffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.873805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.789 [2024-11-26 18:57:12.873860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.873874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.789 [2024-11-26 18:57:12.873928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.873941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.789 [2024-11-26 18:57:12.873996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.874010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.789 [2024-11-26 18:57:12.874066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 
nsid:0 cdw10:00000000 cdw11:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.874083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.789 #34 NEW cov: 12513 ft: 15895 corp: 21/510b lim: 40 exec/s: 34 rss: 75Mb L: 40/40 MS: 1 CopyPart- 00:06:55.789 [2024-11-26 18:57:12.933624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.933649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.789 [2024-11-26 18:57:12.933707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff6df3bd cdw11:e9cd8449 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.933721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.789 [2024-11-26 18:57:12.933775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.933788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.789 #35 NEW cov: 12513 ft: 15937 corp: 22/537b lim: 40 exec/s: 35 rss: 75Mb L: 27/40 MS: 1 PersAutoDict- DE: "m\363\275\351\315\204I\000"- 00:06:55.789 [2024-11-26 18:57:12.993445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0fb155dd cdw11:cd49005d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.789 [2024-11-26 18:57:12.993469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.050 #36 NEW cov: 12513 ft: 15975 corp: 23/545b lim: 40 exec/s: 36 rss: 75Mb L: 8/40 MS: 1 EraseBytes- 00:06:56.050 [2024-11-26 18:57:13.054129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.050 [2024-11-26 18:57:13.054154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.050 [2024-11-26 18:57:13.054218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0000007e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.050 [2024-11-26 18:57:13.054233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.050 [2024-11-26 18:57:13.054286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0055ddcd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.050 [2024-11-26 18:57:13.054300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.050 [2024-11-26 18:57:13.054353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:84490000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.050 [2024-11-26 18:57:13.054367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:06:56.050 #37 NEW cov: 12513 ft: 16030 corp: 24/581b lim: 40 exec/s: 37 rss: 75Mb L: 36/40 MS: 1 ShuffleBytes- 00:06:56.050 [2024-11-26 18:57:13.114255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.050 [2024-11-26 18:57:13.114280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.050 [2024-11-26 18:57:13.114353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0000007e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.050 [2024-11-26 18:57:13.114367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.050 [2024-11-26 18:57:13.114427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0055ddcd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.050 [2024-11-26 18:57:13.114440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.050 [2024-11-26 18:57:13.114497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:84490000 cdw11:00490000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.050 [2024-11-26 18:57:13.114511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.050 #38 NEW cov: 12513 ft: 16042 corp: 25/617b lim: 40 exec/s: 38 rss: 75Mb L: 36/40 MS: 1 CopyPart- 00:06:56.050 [2024-11-26 18:57:13.174436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.050 [2024-11-26 18:57:13.174460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.050 [2024-11-26 18:57:13.174521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.050 [2024-11-26 18:57:13.174535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.050 [2024-11-26 18:57:13.174588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.050 [2024-11-26 18:57:13.174601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.050 [2024-11-26 18:57:13.174655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.050 [2024-11-26 18:57:13.174667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.050 #39 NEW cov: 12513 ft: 16053 corp: 26/656b lim: 40 exec/s: 39 rss: 76Mb L: 39/40 MS: 1 ChangeBinInt- 00:06:56.050 [2024-11-26 18:57:13.234611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.051 
[2024-11-26 18:57:13.234636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.051 [2024-11-26 18:57:13.234691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.051 [2024-11-26 18:57:13.234704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.051 [2024-11-26 18:57:13.234758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0055ddcd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.051 [2024-11-26 18:57:13.234771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.051 [2024-11-26 18:57:13.234822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:84490000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.051 [2024-11-26 18:57:13.234836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.310 #40 NEW cov: 12513 ft: 16074 corp: 27/692b lim: 40 exec/s: 40 rss: 76Mb L: 36/40 MS: 1 CopyPart- 00:06:56.311 [2024-11-26 18:57:13.294725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.294751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.294809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.294823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.294875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0057ddcd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.294888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.294942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:84490000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.294956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.311 #41 NEW cov: 12513 ft: 16106 corp: 28/728b lim: 40 exec/s: 41 rss: 76Mb L: 36/40 MS: 1 ChangeBit- 00:06:56.311 [2024-11-26 18:57:13.334860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00f6ffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.334885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.334941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:56.311 [2024-11-26 18:57:13.334955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.335011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.335023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.335076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.335090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.311 #42 NEW cov: 12513 ft: 16117 corp: 29/764b lim: 40 exec/s: 42 rss: 76Mb L: 36/40 MS: 1 ShuffleBytes- 00:06:56.311 [2024-11-26 18:57:13.374437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0fb155dd cdw11:cd844900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.374463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.311 #43 NEW cov: 12513 ft: 16146 corp: 30/773b lim: 40 exec/s: 43 rss: 76Mb L: 9/40 MS: 1 PersAutoDict- DE: "\017\261U\335\315\204I\000"- 00:06:56.311 [2024-11-26 18:57:13.415059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00f6ffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.415085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.415141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.415155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.415208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.415221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.415277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.415290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.311 #44 NEW cov: 12513 ft: 16158 corp: 31/809b lim: 40 exec/s: 44 rss: 76Mb L: 36/40 MS: 1 ShuffleBytes- 00:06:56.311 [2024-11-26 18:57:13.475485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.475510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.475563] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.475578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.475632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:9f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.475645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.475701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.475715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.475767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00cdcdcd cdw11:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.475781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:56.311 #45 NEW cov: 12513 ft: 16178 corp: 32/849b lim: 40 exec/s: 45 rss: 76Mb L: 40/40 MS: 1 InsertByte- 00:06:56.311 [2024-11-26 18:57:13.515427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a884343 cdw11:43434300 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.515455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.515512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000043 cdw11:43434300 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.515526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.515580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.515594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.311 [2024-11-26 18:57:13.515648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000009f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.311 [2024-11-26 18:57:13.515662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.570 #46 NEW cov: 12513 ft: 16188 corp: 33/888b lim: 40 exec/s: 46 rss: 76Mb L: 39/40 MS: 1 CrossOver- 00:06:56.570 [2024-11-26 18:57:13.575094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:3b8e8e8e cdw11:8e8e8e08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.570 [2024-11-26 18:57:13.575121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.570 #50 NEW cov: 12513 
ft: 16208 corp: 34/896b lim: 40 exec/s: 50 rss: 76Mb L: 8/40 MS: 4 ChangeBit-ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:06:56.570 [2024-11-26 18:57:13.615455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.570 [2024-11-26 18:57:13.615487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.570 [2024-11-26 18:57:13.615543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.570 [2024-11-26 18:57:13.615557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.570 [2024-11-26 18:57:13.615610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.570 [2024-11-26 18:57:13.615623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.570 #51 NEW cov: 12513 ft: 16211 corp: 35/927b lim: 40 exec/s: 25 rss: 76Mb L: 31/40 MS: 1 CopyPart- 00:06:56.570 #51 DONE cov: 12513 ft: 16211 corp: 35/927b lim: 40 exec/s: 25 rss: 76Mb 00:06:56.570 ###### Recommended dictionary. ###### 00:06:56.570 "\017\261U\335\315\204I\000" # Uses: 2 00:06:56.570 "m\363\275\351\315\204I\000" # Uses: 1 00:06:56.570 ###### End of recommended dictionary. ###### 00:06:56.571 Done 51 runs in 2 second(s) 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:56.571 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:56.830 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:56.830 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:56.830 18:57:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:56.830 [2024-11-26 18:57:13.812950] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:56.830 [2024-11-26 18:57:13.813018] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2741189 ] 00:06:56.830 [2024-11-26 18:57:13.997038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.830 [2024-11-26 18:57:14.035675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.090 [2024-11-26 18:57:14.094942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.090 [2024-11-26 18:57:14.111092] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:57.090 INFO: Running with entropic power schedule (0xFF, 100). 00:06:57.090 INFO: Seed: 529393224 00:06:57.090 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:06:57.090 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:06:57.090 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:57.090 INFO: A corpus is not provided, starting from an empty corpus 00:06:57.090 #2 INITED exec/s: 0 rss: 67Mb 00:06:57.090 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:57.090 This may also happen if the target rejected all inputs we tried so far 00:06:57.090 [2024-11-26 18:57:14.156197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0ba1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.090 [2024-11-26 18:57:14.156230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.090 [2024-11-26 18:57:14.156263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.090 [2024-11-26 18:57:14.156278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.090 [2024-11-26 18:57:14.156307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.090 [2024-11-26 18:57:14.156322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.090 [2024-11-26 18:57:14.156350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.090 [2024-11-26 18:57:14.156365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.350 NEW_FUNC[1/717]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:57.350 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:57.350 #10 NEW cov: 12284 ft: 12282 corp: 2/34b lim: 40 exec/s: 0 rss: 74Mb L: 33/33 MS: 3 ChangeBit-CrossOver-InsertRepeatedBytes- 00:06:57.350 [2024-11-26 18:57:14.507171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.350 [2024-11-26 18:57:14.507215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.350 [2024-11-26 18:57:14.507250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.350 [2024-11-26 18:57:14.507267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.350 [2024-11-26 18:57:14.507299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.350 [2024-11-26 18:57:14.507316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.350 [2024-11-26 18:57:14.507352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.350 [2024-11-26 18:57:14.507369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.350 
[2024-11-26 18:57:14.507398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.350 [2024-11-26 18:57:14.507414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.350 #12 NEW cov: 12397 ft: 12934 corp: 3/74b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:57.609 [2024-11-26 18:57:14.567140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.609 [2024-11-26 18:57:14.567173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.610 [2024-11-26 18:57:14.567206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.567222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.610 [2024-11-26 18:57:14.567251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.567267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.610 [2024-11-26 18:57:14.567296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.567311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.610 [2024-11-26 18:57:14.567339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.567354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.610 #13 NEW cov: 12403 ft: 13221 corp: 4/114b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:06:57.610 [2024-11-26 18:57:14.657406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a720a72 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.657436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.610 [2024-11-26 18:57:14.657469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.657490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.610 [2024-11-26 18:57:14.657536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.657552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:06:57.610 [2024-11-26 18:57:14.657581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.657597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.610 [2024-11-26 18:57:14.657626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.657646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.610 #19 NEW cov: 12488 ft: 13506 corp: 5/154b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:06:57.610 [2024-11-26 18:57:14.747588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.747630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.610 [2024-11-26 18:57:14.747662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.747677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.610 [2024-11-26 18:57:14.747705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.747720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.610 [2024-11-26 18:57:14.747748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.747762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.610 #20 NEW cov: 12488 ft: 13709 corp: 6/192b lim: 40 exec/s: 0 rss: 74Mb L: 38/40 MS: 1 InsertRepeatedBytes- 00:06:57.610 [2024-11-26 18:57:14.807803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a707272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.807834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.610 [2024-11-26 18:57:14.807868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.807884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.610 [2024-11-26 18:57:14.807914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.807929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.610 [2024-11-26 18:57:14.807958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.807974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.610 [2024-11-26 18:57:14.808004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.610 [2024-11-26 18:57:14.808020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.871 #26 NEW cov: 12488 ft: 13805 corp: 7/232b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 ChangeBit- 00:06:57.871 [2024-11-26 18:57:14.867844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.871 [2024-11-26 18:57:14.867878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.871 [2024-11-26 18:57:14.867913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.871 [2024-11-26 18:57:14.867933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.871 [2024-11-26 18:57:14.867964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.871 [2024-11-26 18:57:14.867980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.871 #27 NEW cov: 12488 ft: 14226 corp: 8/256b lim: 40 exec/s: 0 rss: 74Mb L: 24/40 MS: 1 EraseBytes- 00:06:57.871 [2024-11-26 18:57:14.928076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0ba1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.871 [2024-11-26 18:57:14.928108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.871 [2024-11-26 18:57:14.928141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a144a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.871 [2024-11-26 18:57:14.928157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.871 [2024-11-26 18:57:14.928187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.871 [2024-11-26 18:57:14.928203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.871 [2024-11-26 18:57:14.928233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.871 [2024-11-26 18:57:14.928248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.871 #28 NEW cov: 12488 ft: 14312 corp: 9/289b lim: 40 exec/s: 0 rss: 75Mb L: 33/40 MS: 1 ChangeByte- 00:06:57.871 [2024-11-26 18:57:15.018305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0ba1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.871 [2024-11-26 18:57:15.018336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.871 [2024-11-26 18:57:15.018370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a144a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.871 [2024-11-26 18:57:15.018387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.871 [2024-11-26 18:57:15.018416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a1a1a1a1 cdw11:a1a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.871 [2024-11-26 18:57:15.018432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.871 [2024-11-26 18:57:15.018460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:40a1a1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.871 [2024-11-26 18:57:15.018487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.131 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:58.131 #29 NEW cov: 12505 ft: 14351 corp: 10/322b lim: 40 exec/s: 0 rss: 75Mb L: 33/40 MS: 1 ChangeByte- 00:06:58.131 [2024-11-26 18:57:15.108583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.131 [2024-11-26 18:57:15.108613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.131 [2024-11-26 18:57:15.108651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.131 [2024-11-26 18:57:15.108666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.131 [2024-11-26 18:57:15.108694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:72ec7272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.131 [2024-11-26 18:57:15.108709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.131 [2024-11-26 18:57:15.108737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.131 [2024-11-26 18:57:15.108751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.131 [2024-11-26 18:57:15.108779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:72727272 
cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.131 [2024-11-26 18:57:15.108794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.131 #30 NEW cov: 12505 ft: 14409 corp: 11/362b lim: 40 exec/s: 30 rss: 75Mb L: 40/40 MS: 1 ChangeByte- 00:06:58.131 [2024-11-26 18:57:15.168651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.131 [2024-11-26 18:57:15.168683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.131 [2024-11-26 18:57:15.168716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72727256 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.131 [2024-11-26 18:57:15.168733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.131 [2024-11-26 18:57:15.168762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:56565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.131 [2024-11-26 18:57:15.168778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.131 [2024-11-26 18:57:15.168807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.131 [2024-11-26 18:57:15.168822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.131 #31 NEW cov: 12505 ft: 14418 corp: 12/399b lim: 40 exec/s: 31 rss: 75Mb L: 37/40 MS: 1 InsertRepeatedBytes- 00:06:58.131 [2024-11-26 18:57:15.258968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.131 [2024-11-26 18:57:15.258998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.131 [2024-11-26 18:57:15.259032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72727256 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.131 [2024-11-26 18:57:15.259048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.131 [2024-11-26 18:57:15.259078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:56565656 cdw11:56565625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.131 [2024-11-26 18:57:15.259094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.131 [2024-11-26 18:57:15.259128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:56727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.131 [2024-11-26 18:57:15.259144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.131 #32 NEW cov: 12505 ft: 14466 corp: 13/437b lim: 40 exec/s: 32 rss: 75Mb L: 38/40 MS: 1 InsertByte- 00:06:58.392 
[2024-11-26 18:57:15.349033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.349063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.392 [2024-11-26 18:57:15.349095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.349110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.392 #33 NEW cov: 12505 ft: 14698 corp: 14/460b lim: 40 exec/s: 33 rss: 75Mb L: 23/40 MS: 1 EraseBytes- 00:06:58.392 [2024-11-26 18:57:15.439465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a727272 cdw11:72727a72 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.439515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.392 [2024-11-26 18:57:15.439550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.439566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.392 [2024-11-26 18:57:15.439596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:72ec7272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.439611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.392 [2024-11-26 18:57:15.439640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.439656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.392 [2024-11-26 18:57:15.439685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.439700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.392 #34 NEW cov: 12505 ft: 14766 corp: 15/500b lim: 40 exec/s: 34 rss: 76Mb L: 40/40 MS: 1 ChangeBinInt- 00:06:58.392 [2024-11-26 18:57:15.529677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.529707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.392 [2024-11-26 18:57:15.529755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.529771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:06:58.392 [2024-11-26 18:57:15.529801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.529816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.392 [2024-11-26 18:57:15.529850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.529865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.392 [2024-11-26 18:57:15.529894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.529910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.392 #35 NEW cov: 12505 ft: 14772 corp: 16/540b lim: 40 exec/s: 35 rss: 76Mb L: 40/40 MS: 1 CrossOver- 00:06:58.392 [2024-11-26 18:57:15.579776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a707272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.579805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.392 [2024-11-26 18:57:15.579836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72722d72 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.579851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.392 [2024-11-26 18:57:15.579879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.579894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.392 [2024-11-26 18:57:15.579921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.579936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.392 [2024-11-26 18:57:15.579964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.392 [2024-11-26 18:57:15.579978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.652 #36 NEW cov: 12505 ft: 14782 corp: 17/580b lim: 40 exec/s: 36 rss: 76Mb L: 40/40 MS: 1 ChangeByte- 00:06:58.652 [2024-11-26 18:57:15.669949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.652 [2024-11-26 18:57:15.669980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:58.652 [2024-11-26 18:57:15.670014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.652 [2024-11-26 18:57:15.670031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.652 [2024-11-26 18:57:15.670061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:72727200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.652 [2024-11-26 18:57:15.670077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.652 #37 NEW cov: 12505 ft: 14792 corp: 18/607b lim: 40 exec/s: 37 rss: 76Mb L: 27/40 MS: 1 InsertRepeatedBytes- 00:06:58.652 [2024-11-26 18:57:15.729900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a700a72 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.652 [2024-11-26 18:57:15.729931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.652 #38 NEW cov: 12505 ft: 15476 corp: 19/619b lim: 40 exec/s: 38 rss: 76Mb L: 12/40 MS: 1 CrossOver- 00:06:58.652 [2024-11-26 18:57:15.790372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a707272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.652 [2024-11-26 18:57:15.790403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.652 [2024-11-26 18:57:15.790437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72722d72 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.652 [2024-11-26 18:57:15.790454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.652 [2024-11-26 18:57:15.790491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.652 [2024-11-26 18:57:15.790507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.652 [2024-11-26 18:57:15.790537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.652 [2024-11-26 18:57:15.790552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.652 [2024-11-26 18:57:15.790581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.652 [2024-11-26 18:57:15.790606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.652 #39 NEW cov: 12505 ft: 15488 corp: 20/659b lim: 40 exec/s: 39 rss: 76Mb L: 40/40 MS: 1 ShuffleBytes- 00:06:58.911 [2024-11-26 18:57:15.880468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:58.911 [2024-11-26 18:57:15.880505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.911 [2024-11-26 18:57:15.880538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.911 [2024-11-26 18:57:15.880553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.911 [2024-11-26 18:57:15.880581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.911 [2024-11-26 18:57:15.880597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.911 #40 NEW cov: 12505 ft: 15510 corp: 21/683b lim: 40 exec/s: 40 rss: 76Mb L: 24/40 MS: 1 CrossOver- 00:06:58.911 [2024-11-26 18:57:15.930676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.911 [2024-11-26 18:57:15.930706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.911 [2024-11-26 18:57:15.930739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:72727256 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.911 [2024-11-26 18:57:15.930756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.911 [2024-11-26 18:57:15.930785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:56565656 cdw11:56565625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.911 [2024-11-26 18:57:15.930801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.911 [2024-11-26 18:57:15.930835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:56727272 cdw11:72727272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.911 [2024-11-26 18:57:15.930851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.911 #41 NEW cov: 12505 ft: 15532 corp: 22/721b lim: 40 exec/s: 41 rss: 76Mb L: 38/40 MS: 1 ShuffleBytes- 00:06:58.911 [2024-11-26 18:57:16.020943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a004100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.911 [2024-11-26 18:57:16.020975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.911 [2024-11-26 18:57:16.021009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.911 [2024-11-26 18:57:16.021025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.911 [2024-11-26 18:57:16.021055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:58.911 [2024-11-26 18:57:16.021070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.911 [2024-11-26 18:57:16.021099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.911 [2024-11-26 18:57:16.021115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.911 #42 NEW cov: 12512 ft: 15603 corp: 23/759b lim: 40 exec/s: 42 rss: 76Mb L: 38/40 MS: 1 ChangeByte- 00:06:58.911 [2024-11-26 18:57:16.081065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a004100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.911 [2024-11-26 18:57:16.081096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.911 [2024-11-26 18:57:16.081129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00007e00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.911 [2024-11-26 18:57:16.081146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.911 [2024-11-26 18:57:16.081175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.911 [2024-11-26 18:57:16.081191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.911 [2024-11-26 18:57:16.081220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.911 [2024-11-26 18:57:16.081235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.170 #43 NEW cov: 12512 ft: 15678 corp: 24/797b lim: 40 exec/s: 21 rss: 76Mb L: 38/40 MS: 1 ChangeByte- 00:06:59.170 #43 DONE cov: 12512 ft: 15678 corp: 24/797b lim: 40 exec/s: 21 rss: 76Mb 00:06:59.170 Done 43 runs in 2 second(s) 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local 
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:59.170 18:57:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:06:59.170 [2024-11-26 18:57:16.308104] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:59.170 [2024-11-26 18:57:16.308172] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2741450 ] 00:06:59.429 [2024-11-26 18:57:16.499427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.429 [2024-11-26 18:57:16.537777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.429 [2024-11-26 18:57:16.597143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.429 [2024-11-26 18:57:16.613284] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:06:59.429 INFO: Running with entropic power schedule (0xFF, 100). 00:06:59.429 INFO: Seed: 3031394838 00:06:59.689 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:06:59.689 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:06:59.689 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:59.689 INFO: A corpus is not provided, starting from an empty corpus 00:06:59.689 #2 INITED exec/s: 0 rss: 67Mb 00:06:59.689 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:59.689 This may also happen if the target rejected all inputs we tried so far 00:06:59.689 [2024-11-26 18:57:16.658847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.689 [2024-11-26 18:57:16.658876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.689 [2024-11-26 18:57:16.658932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.689 [2024-11-26 18:57:16.658946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.949 NEW_FUNC[1/716]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:06:59.949 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:59.949 #12 NEW cov: 12267 ft: 12271 corp: 2/23b lim: 40 exec/s: 0 rss: 75Mb L: 22/22 MS: 5 ChangeByte-InsertByte-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:06:59.949 [2024-11-26 18:57:16.979763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.949 [2024-11-26 18:57:16.979797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.949 [2024-11-26 18:57:16.979855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.949 [2024-11-26 18:57:16.979870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.949 #13 NEW cov: 12385 ft: 12860 corp: 3/45b lim: 40 exec/s: 0 rss: 75Mb L: 22/22 MS: 1 ShuffleBytes- 00:06:59.949 [2024-11-26 18:57:17.039838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.949 [2024-11-26 18:57:17.039863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.949 [2024-11-26 18:57:17.039923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.949 [2024-11-26 18:57:17.039936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.949 #14 NEW cov: 12391 ft: 13160 corp: 4/67b lim: 40 exec/s: 0 rss: 75Mb L: 22/22 MS: 1 ShuffleBytes- 00:06:59.949 [2024-11-26 18:57:17.079907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.949 [2024-11-26 18:57:17.079932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.949 [2024-11-26 18:57:17.079988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.949 [2024-11-26 18:57:17.080002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.949 #15 NEW cov: 12476 ft: 13415 corp: 5/89b lim: 40 exec/s: 0 rss: 75Mb L: 22/22 MS: 1 ShuffleBytes- 00:06:59.949 [2024-11-26 18:57:17.140216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.949 [2024-11-26 18:57:17.140240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.949 [2024-11-26 18:57:17.140297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.949 [2024-11-26 18:57:17.140311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.949 [2024-11-26 18:57:17.140369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11119e9e cdw11:9e111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.949 [2024-11-26 18:57:17.140382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.209 #21 NEW cov: 12476 ft: 13840 corp: 6/114b lim: 40 exec/s: 0 rss: 75Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:00.209 [2024-11-26 18:57:17.180185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.209 [2024-11-26 18:57:17.180210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.209 [2024-11-26 18:57:17.180271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.209 [2024-11-26 18:57:17.180286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.209 #22 NEW cov: 12476 ft: 13942 corp: 7/131b lim: 40 exec/s: 0 rss: 75Mb L: 17/25 MS: 1 EraseBytes- 00:07:00.209 [2024-11-26 18:57:17.240505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.209 [2024-11-26 18:57:17.240530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.209 [2024-11-26 18:57:17.240589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:119e9e11 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.209 [2024-11-26 18:57:17.240603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.209 [2024-11-26 18:57:17.240659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:9e111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.209 [2024-11-26 18:57:17.240673] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.209 #23 NEW cov: 12476 ft: 13995 corp: 8/156b lim: 40 exec/s: 0 rss: 75Mb L: 25/25 MS: 1 ShuffleBytes- 00:07:00.209 [2024-11-26 18:57:17.300681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.209 [2024-11-26 18:57:17.300705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.209 [2024-11-26 18:57:17.300765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.209 [2024-11-26 18:57:17.300778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.209 [2024-11-26 18:57:17.300837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11119e9e cdw11:9e111178 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.209 [2024-11-26 18:57:17.300851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.209 #29 NEW cov: 12476 ft: 14045 corp: 9/182b lim: 40 exec/s: 0 rss: 75Mb L: 26/26 MS: 1 InsertByte- 00:07:00.209 [2024-11-26 18:57:17.340961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.209 [2024-11-26 18:57:17.340987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.209 [2024-11-26 18:57:17.341047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.209 [2024-11-26 18:57:17.341060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.209 [2024-11-26 18:57:17.341120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.209 [2024-11-26 18:57:17.341133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.209 [2024-11-26 18:57:17.341192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.209 [2024-11-26 18:57:17.341209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.209 #30 NEW cov: 12476 ft: 14532 corp: 10/218b lim: 40 exec/s: 0 rss: 75Mb L: 36/36 MS: 1 CopyPart- 00:07:00.209 [2024-11-26 18:57:17.380745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30651111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.209 [2024-11-26 18:57:17.380770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.209 [2024-11-26 18:57:17.380829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 
cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.209 [2024-11-26 18:57:17.380843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.209 #31 NEW cov: 12476 ft: 14640 corp: 11/240b lim: 40 exec/s: 0 rss: 75Mb L: 22/36 MS: 1 ChangeBinInt- 00:07:00.469 [2024-11-26 18:57:17.421041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.469 [2024-11-26 18:57:17.421068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.469 [2024-11-26 18:57:17.421129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:119e9e11 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.469 [2024-11-26 18:57:17.421144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.469 [2024-11-26 18:57:17.421201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:119e1111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.469 [2024-11-26 18:57:17.421216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.469 #32 NEW cov: 12476 ft: 14670 corp: 12/269b lim: 40 exec/s: 0 rss: 75Mb L: 29/36 MS: 1 CrossOver- 00:07:00.469 [2024-11-26 18:57:17.481199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:32681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.469 [2024-11-26 18:57:17.481224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.469 [2024-11-26 18:57:17.481284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.469 [2024-11-26 18:57:17.481297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.469 [2024-11-26 18:57:17.481356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11119e9e cdw11:9e111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.469 [2024-11-26 18:57:17.481370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.469 #33 NEW cov: 12476 ft: 14705 corp: 13/294b lim: 40 exec/s: 0 rss: 75Mb L: 25/36 MS: 1 ChangeBit- 00:07:00.469 [2024-11-26 18:57:17.521291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.469 [2024-11-26 18:57:17.521317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.469 [2024-11-26 18:57:17.521374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.469 [2024-11-26 18:57:17.521388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.469 [2024-11-26 18:57:17.521447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11119e9e cdw11:9e11116f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.469 [2024-11-26 18:57:17.521461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.469 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:00.469 #34 NEW cov: 12499 ft: 14735 corp: 14/320b lim: 40 exec/s: 0 rss: 75Mb L: 26/36 MS: 1 ChangeBinInt- 00:07:00.469 [2024-11-26 18:57:17.581320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30651111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.469 [2024-11-26 18:57:17.581345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.469 [2024-11-26 18:57:17.581404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:91111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.469 [2024-11-26 18:57:17.581418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.469 #35 NEW cov: 12499 ft: 14794 corp: 15/342b lim: 40 exec/s: 0 rss: 75Mb L: 22/36 MS: 1 ChangeBit- 00:07:00.469 [2024-11-26 18:57:17.641757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.469 [2024-11-26 18:57:17.641782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.469 [2024-11-26 18:57:17.641840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.469 [2024-11-26 18:57:17.641854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.469 [2024-11-26 18:57:17.641909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.469 [2024-11-26 18:57:17.641923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.469 [2024-11-26 18:57:17.641978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.469 [2024-11-26 18:57:17.641991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.729 #36 NEW cov: 12499 ft: 14851 corp: 16/377b lim: 40 exec/s: 36 rss: 75Mb L: 35/36 MS: 1 CopyPart- 00:07:00.729 [2024-11-26 18:57:17.701675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.729 [2024-11-26 18:57:17.701701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.729 [2024-11-26 
18:57:17.701759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.729 [2024-11-26 18:57:17.701773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.729 #40 NEW cov: 12499 ft: 14863 corp: 17/398b lim: 40 exec/s: 40 rss: 75Mb L: 21/36 MS: 4 ChangeBinInt-ChangeByte-CopyPart-InsertRepeatedBytes- 00:07:00.729 [2024-11-26 18:57:17.741759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.729 [2024-11-26 18:57:17.741785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.729 [2024-11-26 18:57:17.741842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.729 [2024-11-26 18:57:17.741860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.729 #41 NEW cov: 12499 ft: 14876 corp: 18/419b lim: 40 exec/s: 41 rss: 76Mb L: 21/36 MS: 1 ChangeByte- 00:07:00.729 [2024-11-26 18:57:17.802079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.729 [2024-11-26 18:57:17.802105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.729 [2024-11-26 18:57:17.802166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.729 [2024-11-26 18:57:17.802180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.729 [2024-11-26 18:57:17.802237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11119e9e cdw11:9e11116f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.729 [2024-11-26 18:57:17.802251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.729 #42 NEW cov: 12499 ft: 14890 corp: 19/445b lim: 40 exec/s: 42 rss: 76Mb L: 26/36 MS: 1 ShuffleBytes- 00:07:00.729 [2024-11-26 18:57:17.862241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.729 [2024-11-26 18:57:17.862267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.729 [2024-11-26 18:57:17.862328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:119e9e11 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.729 [2024-11-26 18:57:17.862342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.729 [2024-11-26 18:57:17.862399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11119e9e cdw11:9e11116f SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.729 [2024-11-26 18:57:17.862413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.729 #43 NEW cov: 12499 ft: 14924 corp: 20/471b lim: 40 exec/s: 43 rss: 76Mb L: 26/36 MS: 1 CrossOver- 00:07:00.729 [2024-11-26 18:57:17.922297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30684011 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.729 [2024-11-26 18:57:17.922322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.729 [2024-11-26 18:57:17.922381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.729 [2024-11-26 18:57:17.922395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.989 #44 NEW cov: 12499 ft: 14939 corp: 21/494b lim: 40 exec/s: 44 rss: 76Mb L: 23/36 MS: 1 InsertByte- 00:07:00.989 [2024-11-26 18:57:17.962386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.989 [2024-11-26 18:57:17.962411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.989 [2024-11-26 18:57:17.962477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.989 [2024-11-26 18:57:17.962496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.989 #45 NEW cov: 12499 ft: 14956 corp: 22/512b lim: 40 exec/s: 45 rss: 76Mb L: 18/36 MS: 1 CopyPart- 00:07:00.989 [2024-11-26 18:57:18.022701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11119111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.989 [2024-11-26 18:57:18.022725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.989 [2024-11-26 18:57:18.022784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:119e9e11 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.989 [2024-11-26 18:57:18.022798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.989 [2024-11-26 18:57:18.022855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:9e111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.989 [2024-11-26 18:57:18.022868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.989 #46 NEW cov: 12499 ft: 14964 corp: 23/537b lim: 40 exec/s: 46 rss: 76Mb L: 25/36 MS: 1 ChangeBit- 00:07:00.989 [2024-11-26 18:57:18.062781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.989 [2024-11-26 18:57:18.062806] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.989 [2024-11-26 18:57:18.062876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.989 [2024-11-26 18:57:18.062890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.989 [2024-11-26 18:57:18.062947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11119e9e cdw11:9e111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.989 [2024-11-26 18:57:18.062960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.989 #47 NEW cov: 12499 ft: 14975 corp: 24/562b lim: 40 exec/s: 47 rss: 76Mb L: 25/36 MS: 1 ShuffleBytes- 00:07:00.989 [2024-11-26 18:57:18.102766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30651168 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.989 [2024-11-26 18:57:18.102791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.989 [2024-11-26 18:57:18.102852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.989 [2024-11-26 18:57:18.102866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.989 #48 NEW cov: 12499 ft: 14988 corp: 25/584b lim: 40 exec/s: 48 rss: 76Mb L: 22/36 MS: 1 ChangeByte- 00:07:00.989 [2024-11-26 18:57:18.142967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30651111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.989 [2024-11-26 18:57:18.142991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.989 [2024-11-26 18:57:18.143066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.989 [2024-11-26 18:57:18.143079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.989 [2024-11-26 18:57:18.143136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.989 [2024-11-26 18:57:18.143153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.989 #49 NEW cov: 12499 ft: 15007 corp: 26/611b lim: 40 exec/s: 49 rss: 76Mb L: 27/36 MS: 1 CrossOver- 00:07:00.989 [2024-11-26 18:57:18.182986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.989 [2024-11-26 18:57:18.183010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.989 [2024-11-26 18:57:18.183069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11efeeee cdw11:eeeeeeee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.989 [2024-11-26 18:57:18.183083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.249 #50 NEW cov: 12499 ft: 15072 corp: 27/633b lim: 40 exec/s: 50 rss: 76Mb L: 22/36 MS: 1 ChangeBinInt- 00:07:01.249 [2024-11-26 18:57:18.223246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.249 [2024-11-26 18:57:18.223271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.249 [2024-11-26 18:57:18.223331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:119e9e11 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.249 [2024-11-26 18:57:18.223345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.249 [2024-11-26 18:57:18.223404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:6f111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.249 [2024-11-26 18:57:18.223419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.249 #51 NEW cov: 12499 ft: 15098 corp: 28/658b lim: 40 exec/s: 51 rss: 76Mb L: 25/36 MS: 1 CrossOver- 00:07:01.249 [2024-11-26 18:57:18.263221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.249 [2024-11-26 18:57:18.263245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.249 [2024-11-26 18:57:18.263305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.249 [2024-11-26 18:57:18.263319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.249 #52 NEW cov: 12499 ft: 15111 corp: 29/675b lim: 40 exec/s: 52 rss: 76Mb L: 17/36 MS: 1 ShuffleBytes- 00:07:01.249 [2024-11-26 18:57:18.303329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.249 [2024-11-26 18:57:18.303353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.249 [2024-11-26 18:57:18.303413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:119e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.249 [2024-11-26 18:57:18.303426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.249 #53 NEW cov: 12499 ft: 15163 corp: 30/695b lim: 40 exec/s: 53 rss: 76Mb L: 20/36 MS: 1 EraseBytes- 00:07:01.249 [2024-11-26 18:57:18.343641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11119111 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.249 [2024-11-26 18:57:18.343669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.249 [2024-11-26 18:57:18.343728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11de9e11 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.250 [2024-11-26 18:57:18.343742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.250 [2024-11-26 18:57:18.343800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:9e111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.250 [2024-11-26 18:57:18.343813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.250 #54 NEW cov: 12499 ft: 15176 corp: 31/720b lim: 40 exec/s: 54 rss: 76Mb L: 25/36 MS: 1 ChangeBit- 00:07:01.250 [2024-11-26 18:57:18.403711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30651111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.250 [2024-11-26 18:57:18.403737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.250 [2024-11-26 18:57:18.403797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.250 [2024-11-26 18:57:18.403811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.250 #55 NEW cov: 12499 ft: 15234 corp: 32/741b lim: 40 exec/s: 55 rss: 76Mb L: 21/36 MS: 1 EraseBytes- 00:07:01.250 [2024-11-26 18:57:18.443629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.250 [2024-11-26 18:57:18.443653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.510 #56 NEW cov: 12499 ft: 15568 corp: 33/752b lim: 40 exec/s: 56 rss: 77Mb L: 11/36 MS: 1 EraseBytes- 00:07:01.510 [2024-11-26 18:57:18.503953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681115 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.510 [2024-11-26 18:57:18.503978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.510 [2024-11-26 18:57:18.504038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.510 [2024-11-26 18:57:18.504052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.510 #57 NEW cov: 12499 ft: 15646 corp: 34/774b lim: 40 exec/s: 57 rss: 77Mb L: 22/36 MS: 1 ChangeBinInt- 00:07:01.510 [2024-11-26 18:57:18.544038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.510 [2024-11-26 18:57:18.544062] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.510 [2024-11-26 18:57:18.544123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff04e3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.510 [2024-11-26 18:57:18.544137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.510 #58 NEW cov: 12499 ft: 15648 corp: 35/790b lim: 40 exec/s: 58 rss: 77Mb L: 16/36 MS: 1 EraseBytes- 00:07:01.510 [2024-11-26 18:57:18.584168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:306511d1 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.510 [2024-11-26 18:57:18.584195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.510 [2024-11-26 18:57:18.584257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.510 [2024-11-26 18:57:18.584271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.510 #59 NEW cov: 12499 ft: 15696 corp: 36/812b lim: 40 exec/s: 59 rss: 77Mb L: 22/36 MS: 1 InsertByte- 00:07:01.510 [2024-11-26 18:57:18.644490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30681111 cdw11:11651111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.510 [2024-11-26 18:57:18.644514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.510 [2024-11-26 18:57:18.644588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:119e9e11 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.510 [2024-11-26 18:57:18.644602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.510 [2024-11-26 18:57:18.644660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:9e111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.510 [2024-11-26 18:57:18.644674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.510 #60 NEW cov: 12499 ft: 15725 corp: 37/837b lim: 40 exec/s: 30 rss: 77Mb L: 25/36 MS: 1 ChangeByte- 00:07:01.510 #60 DONE cov: 12499 ft: 15725 corp: 37/837b lim: 40 exec/s: 30 rss: 77Mb 00:07:01.510 Done 60 runs in 2 second(s) 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:01.771 18:57:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:01.771 [2024-11-26 18:57:18.820137] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:07:01.771 [2024-11-26 18:57:18.820206] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2741752 ] 00:07:02.031 [2024-11-26 18:57:19.012167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.031 [2024-11-26 18:57:19.051215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.031 [2024-11-26 18:57:19.110440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.031 [2024-11-26 18:57:19.126614] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:02.031 INFO: Running with entropic power schedule (0xFF, 100). 00:07:02.031 INFO: Seed: 1249452510 00:07:02.031 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:07:02.031 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:07:02.031 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:02.031 INFO: A corpus is not provided, starting from an empty corpus 00:07:02.031 #2 INITED exec/s: 0 rss: 67Mb 00:07:02.031 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:02.031 This may also happen if the target rejected all inputs we tried so far 00:07:02.031 [2024-11-26 18:57:19.194981] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000085 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.031 [2024-11-26 18:57:19.195018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.031 [2024-11-26 18:57:19.195113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.031 [2024-11-26 18:57:19.195131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.031 [2024-11-26 18:57:19.195223] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.031 [2024-11-26 18:57:19.195241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.031 [2024-11-26 18:57:19.195339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000007a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.031 [2024-11-26 18:57:19.195354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.597 NEW_FUNC[1/717]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:02.597 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:02.597 #16 NEW cov: 12266 ft: 12267 corp: 2/29b lim: 35 exec/s: 0 rss: 74Mb L: 28/28 MS: 4 ChangeBinInt-CopyPart-InsertByte-InsertRepeatedBytes- 00:07:02.597 [2024-11-26 18:57:19.535509] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ARBITRATION cid:4 cdw10:80000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 [2024-11-26 18:57:19.535554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.597 [2024-11-26 18:57:19.535657] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 [2024-11-26 18:57:19.535677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.597 [2024-11-26 18:57:19.535773] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 [2024-11-26 18:57:19.535794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.597 NEW_FUNC[1/1]: 0x46a6e8 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:07:02.597 #21 NEW cov: 12420 ft: 13255 corp: 3/56b lim: 35 exec/s: 0 rss: 74Mb L: 27/28 MS: 5 ChangeBinInt-ChangeBinInt-CrossOver-ChangeBinInt-InsertRepeatedBytes- 00:07:02.597 [2024-11-26 18:57:19.595743] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 
[2024-11-26 18:57:19.595775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.597 [2024-11-26 18:57:19.595868] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 [2024-11-26 18:57:19.595885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.597 [2024-11-26 18:57:19.595986] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 [2024-11-26 18:57:19.596004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.597 NEW_FUNC[1/1]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:02.597 #24 NEW cov: 12436 ft: 13407 corp: 4/79b lim: 35 exec/s: 0 rss: 74Mb L: 23/28 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:07:02.597 [2024-11-26 18:57:19.646008] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 [2024-11-26 18:57:19.646036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.597 [2024-11-26 18:57:19.646134] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 [2024-11-26 18:57:19.646157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.597 [2024-11-26 18:57:19.646254] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 [2024-11-26 18:57:19.646273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.597 #25 NEW cov: 12521 ft: 13651 corp: 5/105b lim: 35 exec/s: 0 rss: 74Mb L: 26/28 MS: 1 CrossOver- 00:07:02.597 [2024-11-26 18:57:19.696396] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 [2024-11-26 18:57:19.696423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.597 [2024-11-26 18:57:19.696526] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 [2024-11-26 18:57:19.696543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.597 [2024-11-26 18:57:19.696640] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 [2024-11-26 18:57:19.696658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.597 #26 NEW cov: 12521 ft: 13782 corp: 6/128b lim: 35 exec/s: 0 rss: 74Mb L: 23/28 MS: 1 ShuffleBytes- 00:07:02.597 [2024-11-26 18:57:19.767169] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ARBITRATION cid:4 cdw10:80000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 [2024-11-26 18:57:19.767197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.597 [2024-11-26 18:57:19.767284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 [2024-11-26 18:57:19.767304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.597 [2024-11-26 18:57:19.767399] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 [2024-11-26 18:57:19.767420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.597 [2024-11-26 18:57:19.767518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.597 [2024-11-26 18:57:19.767540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.856 #27 NEW cov: 12521 ft: 13897 corp: 7/156b lim: 35 exec/s: 0 rss: 75Mb L: 28/28 MS: 1 InsertByte- 00:07:02.856 [2024-11-26 18:57:19.836710] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.856 [2024-11-26 18:57:19.836737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.856 [2024-11-26 18:57:19.836832] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.856 [2024-11-26 18:57:19.836849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.856 #28 NEW cov: 12521 ft: 14166 corp: 8/171b lim: 35 exec/s: 0 rss: 75Mb L: 15/28 MS: 1 EraseBytes- 00:07:02.856 [2024-11-26 18:57:19.907091] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.856 [2024-11-26 18:57:19.907119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.856 [2024-11-26 18:57:19.907214] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.856 [2024-11-26 18:57:19.907231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.856 #29 NEW cov: 12521 ft: 14206 corp: 9/186b lim: 35 exec/s: 0 rss: 75Mb L: 15/28 MS: 1 EraseBytes- 00:07:02.856 [2024-11-26 18:57:19.978618] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ARBITRATION cid:4 cdw10:80000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.856 [2024-11-26 18:57:19.978649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.856 [2024-11-26 18:57:19.978753] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:5 cdw10:00000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.856 [2024-11-26 18:57:19.978770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.856 [2024-11-26 18:57:19.978869] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.856 [2024-11-26 18:57:19.978888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.856 [2024-11-26 18:57:19.978986] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.856 [2024-11-26 18:57:19.979005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.856 [2024-11-26 18:57:19.979105] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.856 [2024-11-26 18:57:19.979127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.856 #30 NEW cov: 12521 ft: 14278 corp: 10/221b lim: 35 exec/s: 0 rss: 75Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:02.856 [2024-11-26 18:57:20.048073] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.856 [2024-11-26 18:57:20.048103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.856 [2024-11-26 18:57:20.048207] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.856 [2024-11-26 18:57:20.048226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.115 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:03.115 #31 NEW cov: 12544 ft: 14354 corp: 11/236b lim: 35 exec/s: 0 rss: 75Mb L: 15/35 MS: 1 ChangeByte- 00:07:03.115 [2024-11-26 18:57:20.128499] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.115 [2024-11-26 18:57:20.128526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.115 [2024-11-26 18:57:20.128642] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.115 [2024-11-26 18:57:20.128658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.115 #32 NEW cov: 12544 ft: 14395 corp: 12/252b lim: 35 exec/s: 32 rss: 75Mb L: 16/35 MS: 1 InsertByte- 00:07:03.115 [2024-11-26 18:57:20.199391] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.115 [2024-11-26 18:57:20.199419] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.115 [2024-11-26 18:57:20.199521] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.115 [2024-11-26 18:57:20.199539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.115 [2024-11-26 18:57:20.199637] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.115 [2024-11-26 18:57:20.199658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.115 #33 NEW cov: 12544 ft: 14438 corp: 13/278b lim: 35 exec/s: 33 rss: 75Mb L: 26/35 MS: 1 CrossOver- 00:07:03.115 [2024-11-26 18:57:20.250027] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.115 [2024-11-26 18:57:20.250056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.115 [2024-11-26 18:57:20.250157] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.115 [2024-11-26 18:57:20.250176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.115 [2024-11-26 18:57:20.250282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.115 [2024-11-26 18:57:20.250300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.115 [2024-11-26 18:57:20.250409] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.115 [2024-11-26 18:57:20.250427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.115 #34 NEW cov: 12544 ft: 14482 corp: 14/308b lim: 35 exec/s: 34 rss: 75Mb L: 30/35 MS: 1 CopyPart- 00:07:03.115 [2024-11-26 18:57:20.310161] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.115 [2024-11-26 18:57:20.310200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.115 [2024-11-26 18:57:20.310299] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.115 [2024-11-26 18:57:20.310325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.115 [2024-11-26 18:57:20.310434] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.115 [2024-11-26 18:57:20.310458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.374 #35 NEW cov: 12544 ft: 14509 corp: 15/334b lim: 35 
exec/s: 35 rss: 75Mb L: 26/35 MS: 1 ChangeBit- 00:07:03.374 [2024-11-26 18:57:20.391321] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ARBITRATION cid:4 cdw10:80000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.374 [2024-11-26 18:57:20.391350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.374 [2024-11-26 18:57:20.391460] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:5 cdw10:00000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.374 [2024-11-26 18:57:20.391483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.374 [2024-11-26 18:57:20.391587] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.374 [2024-11-26 18:57:20.391607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.374 [2024-11-26 18:57:20.391702] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.374 [2024-11-26 18:57:20.391722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.374 [2024-11-26 18:57:20.391823] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.374 [2024-11-26 18:57:20.391842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.374 #36 NEW cov: 12544 ft: 14519 corp: 16/369b lim: 35 exec/s: 36 rss: 75Mb L: 35/35 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:03.374 [2024-11-26 18:57:20.470912] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.374 [2024-11-26 18:57:20.470944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.374 [2024-11-26 18:57:20.471042] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.374 [2024-11-26 18:57:20.471065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.374 [2024-11-26 18:57:20.471168] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.375 [2024-11-26 18:57:20.471191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.375 #37 NEW cov: 12544 ft: 14552 corp: 17/395b lim: 35 exec/s: 37 rss: 75Mb L: 26/35 MS: 1 ChangeByte- 00:07:03.375 [2024-11-26 18:57:20.520531] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.375 [2024-11-26 18:57:20.520561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.375 #40 NEW cov: 12544 ft: 
15214 corp: 18/404b lim: 35 exec/s: 40 rss: 75Mb L: 9/35 MS: 3 CopyPart-ShuffleBytes-CrossOver- 00:07:03.375 [2024-11-26 18:57:20.571291] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.375 [2024-11-26 18:57:20.571319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.375 [2024-11-26 18:57:20.571415] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.375 [2024-11-26 18:57:20.571435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.633 #42 NEW cov: 12544 ft: 15229 corp: 19/420b lim: 35 exec/s: 42 rss: 75Mb L: 16/35 MS: 2 InsertByte-CrossOver- 00:07:03.633 [2024-11-26 18:57:20.622468] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.633 [2024-11-26 18:57:20.622501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.633 [2024-11-26 18:57:20.622596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.633 [2024-11-26 18:57:20.622613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.633 [2024-11-26 18:57:20.622719] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.633 [2024-11-26 18:57:20.622752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.633 [2024-11-26 18:57:20.622853] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.633 [2024-11-26 18:57:20.622872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.633 #43 NEW cov: 12544 ft: 15247 corp: 20/450b lim: 35 exec/s: 43 rss: 75Mb L: 30/35 MS: 1 ChangeBinInt- 00:07:03.633 [2024-11-26 18:57:20.692645] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.633 [2024-11-26 18:57:20.692672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.634 [2024-11-26 18:57:20.692771] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.634 [2024-11-26 18:57:20.692790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.634 [2024-11-26 18:57:20.692906] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.634 [2024-11-26 18:57:20.692944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.634 #44 NEW cov: 12544 ft: 15284 
corp: 21/476b lim: 35 exec/s: 44 rss: 75Mb L: 26/35 MS: 1 CrossOver- 00:07:03.634 [2024-11-26 18:57:20.763475] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.634 [2024-11-26 18:57:20.763518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.634 [2024-11-26 18:57:20.763621] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.634 [2024-11-26 18:57:20.763639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.634 [2024-11-26 18:57:20.763739] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.634 [2024-11-26 18:57:20.763755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.634 [2024-11-26 18:57:20.763849] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.634 [2024-11-26 18:57:20.763867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.634 #45 NEW cov: 12544 ft: 15302 corp: 22/506b lim: 35 exec/s: 45 rss: 75Mb L: 30/35 MS: 1 ChangeBinInt- 00:07:03.634 [2024-11-26 18:57:20.814021] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ARBITRATION cid:4 cdw10:80000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.634 [2024-11-26 18:57:20.814051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.634 [2024-11-26 18:57:20.814145] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:5 cdw10:00000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.634 [2024-11-26 18:57:20.814162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.634 [2024-11-26 18:57:20.814263] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.634 [2024-11-26 18:57:20.814282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.634 [2024-11-26 18:57:20.814382] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.634 [2024-11-26 18:57:20.814402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.634 [2024-11-26 18:57:20.814491] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.634 [2024-11-26 18:57:20.814512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.634 #46 NEW cov: 12544 ft: 15346 corp: 23/541b lim: 35 exec/s: 46 rss: 75Mb L: 35/35 MS: 1 ChangeBit- 00:07:03.893 [2024-11-26 18:57:20.864148] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 [2024-11-26 18:57:20.864179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.893 [2024-11-26 18:57:20.864280] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 [2024-11-26 18:57:20.864297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.893 [2024-11-26 18:57:20.864394] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 [2024-11-26 18:57:20.864412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.893 [2024-11-26 18:57:20.864524] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 [2024-11-26 18:57:20.864548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.893 #47 NEW cov: 12544 ft: 15384 corp: 24/571b lim: 35 exec/s: 47 rss: 75Mb L: 30/35 MS: 1 ShuffleBytes- 00:07:03.893 [2024-11-26 18:57:20.935039] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ARBITRATION cid:4 cdw10:80000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 [2024-11-26 18:57:20.935068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.893 [2024-11-26 18:57:20.935163] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:5 cdw10:00000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 [2024-11-26 18:57:20.935181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.893 [2024-11-26 18:57:20.935272] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 [2024-11-26 18:57:20.935294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.893 [2024-11-26 18:57:20.935393] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 [2024-11-26 18:57:20.935413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.893 [2024-11-26 18:57:20.935529] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 [2024-11-26 18:57:20.935545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.893 #48 NEW cov: 12544 ft: 15429 corp: 25/606b lim: 35 exec/s: 48 rss: 76Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:03.893 [2024-11-26 18:57:21.004260] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 
[2024-11-26 18:57:21.004290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.893 [2024-11-26 18:57:21.004392] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 [2024-11-26 18:57:21.004411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.893 #49 NEW cov: 12544 ft: 15480 corp: 26/622b lim: 35 exec/s: 49 rss: 76Mb L: 16/35 MS: 1 EraseBytes- 00:07:03.893 [2024-11-26 18:57:21.055935] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ARBITRATION cid:4 cdw10:80000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 [2024-11-26 18:57:21.055967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.893 [2024-11-26 18:57:21.056007] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:5 cdw10:00000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 [2024-11-26 18:57:21.056026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.893 [2024-11-26 18:57:21.056127] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 [2024-11-26 18:57:21.056148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.893 [2024-11-26 18:57:21.056253] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 [2024-11-26 18:57:21.056275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.893 [2024-11-26 18:57:21.056377] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.893 [2024-11-26 18:57:21.056395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.893 #50 NEW cov: 12544 ft: 15524 corp: 27/657b lim: 35 exec/s: 50 rss: 76Mb L: 35/35 MS: 1 ChangeBit- 00:07:04.153 [2024-11-26 18:57:21.124942] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.153 [2024-11-26 18:57:21.124972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.153 #51 NEW cov: 12544 ft: 15543 corp: 28/666b lim: 35 exec/s: 51 rss: 76Mb L: 9/35 MS: 1 ChangeBit- 00:07:04.153 [2024-11-26 18:57:21.196178] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.153 [2024-11-26 18:57:21.196205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.153 [2024-11-26 18:57:21.196300] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.153 [2024-11-26 
18:57:21.196317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.153 [2024-11-26 18:57:21.196415] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.153 [2024-11-26 18:57:21.196432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.153 #52 NEW cov: 12544 ft: 15552 corp: 29/690b lim: 35 exec/s: 26 rss: 76Mb L: 24/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:04.153 #52 DONE cov: 12544 ft: 15552 corp: 29/690b lim: 35 exec/s: 26 rss: 76Mb 00:07:04.153 ###### Recommended dictionary. ###### 00:07:04.153 "\001\000\000\000\000\000\000\000" # Uses: 1 00:07:04.153 ###### End of recommended dictionary. ###### 00:07:04.153 Done 52 runs in 2 second(s) 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:04.153 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:04.412 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:04.412 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:04.412 18:57:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:04.412 [2024-11-26 18:57:21.394940] Starting 
SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:07:04.412 [2024-11-26 18:57:21.395007] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742114 ] 00:07:04.412 [2024-11-26 18:57:21.588450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.671 [2024-11-26 18:57:21.627705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.671 [2024-11-26 18:57:21.687060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.671 [2024-11-26 18:57:21.703207] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:04.671 INFO: Running with entropic power schedule (0xFF, 100). 00:07:04.671 INFO: Seed: 3825441775 00:07:04.671 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:07:04.671 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:07:04.671 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:04.671 INFO: A corpus is not provided, starting from an empty corpus 00:07:04.671 #2 INITED exec/s: 0 rss: 67Mb 00:07:04.671 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:04.671 This may also happen if the target rejected all inputs we tried so far 00:07:04.671 [2024-11-26 18:57:21.771313] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.671 [2024-11-26 18:57:21.771351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.671 [2024-11-26 18:57:21.771455] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.671 [2024-11-26 18:57:21.771476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.671 [2024-11-26 18:57:21.771576] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.671 [2024-11-26 18:57:21.771593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.930 NEW_FUNC[1/716]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:04.930 NEW_FUNC[2/716]: 0x46fc18 in feat_interrupt_coalescing /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:325 00:07:04.930 #4 NEW cov: 12267 ft: 12268 corp: 2/30b lim: 35 exec/s: 0 rss: 74Mb L: 29/29 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:04.930 [2024-11-26 18:57:22.112308] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.930 [2024-11-26 18:57:22.112359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.930 [2024-11-26 18:57:22.112467] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:04.930 [2024-11-26 18:57:22.112492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.930 [2024-11-26 18:57:22.112595] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.930 [2024-11-26 18:57:22.112620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.189 NEW_FUNC[1/1]: 0x19b6158 in nvme_get_transport /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_transport.c:56 00:07:05.189 #5 NEW cov: 12389 ft: 12872 corp: 3/59b lim: 35 exec/s: 0 rss: 74Mb L: 29/29 MS: 1 ShuffleBytes- 00:07:05.189 [2024-11-26 18:57:22.192833] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.189 [2024-11-26 18:57:22.192864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.189 [2024-11-26 18:57:22.192964] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.189 [2024-11-26 18:57:22.192980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.189 [2024-11-26 18:57:22.193080] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.189 [2024-11-26 18:57:22.193099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.189 #6 NEW cov: 12395 ft: 13076 corp: 4/92b lim: 35 exec/s: 0 rss: 75Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:05.189 [2024-11-26 18:57:22.263020] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.189 [2024-11-26 18:57:22.263048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.189 [2024-11-26 18:57:22.263147] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.189 [2024-11-26 18:57:22.263165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.189 [2024-11-26 18:57:22.263263] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.189 [2024-11-26 18:57:22.263281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.189 #7 NEW cov: 12480 ft: 13407 corp: 5/121b lim: 35 exec/s: 0 rss: 75Mb L: 29/33 MS: 1 ChangeBit- 00:07:05.189 [2024-11-26 18:57:22.313162] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.189 [2024-11-26 18:57:22.313189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.189 [2024-11-26 18:57:22.313285] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.189 [2024-11-26 18:57:22.313300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.189 [2024-11-26 18:57:22.313408] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.189 [2024-11-26 18:57:22.313425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.189 [2024-11-26 18:57:22.313522] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.189 [2024-11-26 18:57:22.313540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.189 #13 NEW cov: 12480 ft: 13635 corp: 6/153b lim: 35 exec/s: 0 rss: 75Mb L: 32/33 MS: 1 InsertRepeatedBytes- 00:07:05.189 [2024-11-26 18:57:22.383593] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.189 [2024-11-26 18:57:22.383624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.189 [2024-11-26 18:57:22.383732] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.189 [2024-11-26 18:57:22.383747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.189 [2024-11-26 18:57:22.383840] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.189 [2024-11-26 18:57:22.383855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.448 #14 NEW cov: 12480 ft: 13718 corp: 7/186b lim: 35 exec/s: 0 rss: 75Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:05.448 [2024-11-26 18:57:22.433799] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.448 [2024-11-26 18:57:22.433825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.448 [2024-11-26 18:57:22.434034] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.448 [2024-11-26 18:57:22.434050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.448 #20 NEW cov: 12480 ft: 13980 corp: 8/215b lim: 35 exec/s: 0 rss: 75Mb L: 29/33 MS: 1 CrossOver- 00:07:05.448 [2024-11-26 18:57:22.484003] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.448 [2024-11-26 18:57:22.484028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.448 [2024-11-26 18:57:22.484124] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.448 
[2024-11-26 18:57:22.484139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.448 [2024-11-26 18:57:22.484244] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.449 [2024-11-26 18:57:22.484261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.449 #21 NEW cov: 12480 ft: 14025 corp: 9/244b lim: 35 exec/s: 0 rss: 75Mb L: 29/33 MS: 1 ChangeBinInt- 00:07:05.449 [2024-11-26 18:57:22.534288] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.449 [2024-11-26 18:57:22.534313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.449 [2024-11-26 18:57:22.534413] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000628 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.449 [2024-11-26 18:57:22.534430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.449 [2024-11-26 18:57:22.534531] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.449 [2024-11-26 18:57:22.534547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.449 #22 NEW cov: 12480 ft: 14047 corp: 10/273b lim: 35 exec/s: 0 rss: 75Mb L: 29/33 MS: 1 ChangeBit- 00:07:05.449 [2024-11-26 18:57:22.604345] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.449 [2024-11-26 18:57:22.604371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.449 [2024-11-26 18:57:22.604484] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.449 [2024-11-26 18:57:22.604500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.449 [2024-11-26 18:57:22.604598] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.449 [2024-11-26 18:57:22.604615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.449 #23 NEW cov: 12480 ft: 14078 corp: 11/302b lim: 35 exec/s: 0 rss: 75Mb L: 29/33 MS: 1 CMP- DE: "\000\002\000\000"- 00:07:05.449 [2024-11-26 18:57:22.654806] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.449 [2024-11-26 18:57:22.654833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.449 [2024-11-26 18:57:22.654940] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.449 [2024-11-26 18:57:22.654956] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.708 NEW_FUNC[1/2]: 0x46ba68 in feat_power_management /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:282 00:07:05.708 NEW_FUNC[2/2]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:05.708 #24 NEW cov: 12526 ft: 14175 corp: 12/331b lim: 35 exec/s: 0 rss: 75Mb L: 29/33 MS: 1 PersAutoDict- DE: "\000\002\000\000"- 00:07:05.708 [2024-11-26 18:57:22.704942] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.708 [2024-11-26 18:57:22.704970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.708 [2024-11-26 18:57:22.705072] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.708 [2024-11-26 18:57:22.705088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.709 [2024-11-26 18:57:22.705194] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.709 [2024-11-26 18:57:22.705210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.709 #25 NEW cov: 12526 ft: 14226 corp: 13/360b lim: 35 exec/s: 25 rss: 75Mb L: 29/33 MS: 1 ChangeByte- 00:07:05.709 [2024-11-26 18:57:22.775130] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.709 [2024-11-26 18:57:22.775157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.709 [2024-11-26 18:57:22.775266] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.709 [2024-11-26 18:57:22.775285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.709 [2024-11-26 18:57:22.775386] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.709 [2024-11-26 18:57:22.775405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.709 #26 NEW cov: 12526 ft: 14249 corp: 14/394b lim: 35 exec/s: 26 rss: 75Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:05.709 [2024-11-26 18:57:22.825217] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.709 [2024-11-26 18:57:22.825247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.709 [2024-11-26 18:57:22.825345] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.709 [2024-11-26 18:57:22.825363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.709 [2024-11-26 18:57:22.825466] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.709 [2024-11-26 18:57:22.825487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.709 [2024-11-26 18:57:22.825588] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.709 [2024-11-26 18:57:22.825604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.709 #27 NEW cov: 12526 ft: 14321 corp: 15/426b lim: 35 exec/s: 27 rss: 75Mb L: 32/34 MS: 1 ChangeByte- 00:07:05.709 [2024-11-26 18:57:22.895807] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.709 [2024-11-26 18:57:22.895834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.709 [2024-11-26 18:57:22.895928] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.709 [2024-11-26 18:57:22.895945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.709 [2024-11-26 18:57:22.896038] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.709 [2024-11-26 18:57:22.896056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.968 #28 NEW cov: 12526 ft: 14355 corp: 16/455b lim: 35 exec/s: 28 rss: 75Mb L: 29/34 MS: 1 PersAutoDict- DE: "\000\002\000\000"- 00:07:05.969 [2024-11-26 18:57:22.966240] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.969 [2024-11-26 18:57:22.966268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.969 [2024-11-26 18:57:22.966361] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000628 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.969 [2024-11-26 18:57:22.966379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.969 [2024-11-26 18:57:22.966483] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.969 [2024-11-26 18:57:22.966498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.969 #29 NEW cov: 12526 ft: 14389 corp: 17/484b lim: 35 exec/s: 29 rss: 75Mb L: 29/34 MS: 1 ChangeBinInt- 00:07:05.969 [2024-11-26 18:57:23.036365] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.969 [2024-11-26 18:57:23.036394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.969 [2024-11-26 18:57:23.036486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.969 [2024-11-26 18:57:23.036505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.969 [2024-11-26 18:57:23.036602] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.969 [2024-11-26 18:57:23.036620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.969 [2024-11-26 18:57:23.036718] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.969 [2024-11-26 18:57:23.036735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.969 #30 NEW cov: 12526 ft: 14464 corp: 18/516b lim: 35 exec/s: 30 rss: 75Mb L: 32/34 MS: 1 ShuffleBytes- 00:07:05.969 [2024-11-26 18:57:23.106830] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.969 [2024-11-26 18:57:23.106856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.969 [2024-11-26 18:57:23.106950] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000628 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.969 [2024-11-26 18:57:23.106966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.969 [2024-11-26 18:57:23.107058] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.969 [2024-11-26 18:57:23.107075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.969 #31 NEW cov: 12526 ft: 14493 corp: 19/547b lim: 35 exec/s: 31 rss: 75Mb L: 31/34 MS: 1 CMP- DE: "\001\000"- 00:07:05.969 [2024-11-26 18:57:23.176899] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.969 [2024-11-26 18:57:23.176927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.969 [2024-11-26 18:57:23.177037] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.969 [2024-11-26 18:57:23.177055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.228 #32 NEW cov: 12526 ft: 14881 corp: 20/572b lim: 35 exec/s: 32 rss: 75Mb L: 25/34 MS: 1 EraseBytes- 00:07:06.228 [2024-11-26 18:57:23.247497] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.228 [2024-11-26 18:57:23.247526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.228 [2024-11-26 18:57:23.247633] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:06.228 [2024-11-26 18:57:23.247650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.228 [2024-11-26 18:57:23.247742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.228 [2024-11-26 18:57:23.247758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.228 #33 NEW cov: 12526 ft: 14917 corp: 21/605b lim: 35 exec/s: 33 rss: 75Mb L: 33/34 MS: 1 ChangeBinInt- 00:07:06.228 [2024-11-26 18:57:23.317650] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.228 [2024-11-26 18:57:23.317678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.228 [2024-11-26 18:57:23.317771] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.228 [2024-11-26 18:57:23.317790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.228 [2024-11-26 18:57:23.317891] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.228 [2024-11-26 18:57:23.317908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.228 #34 NEW cov: 12526 ft: 14922 corp: 22/634b lim: 35 exec/s: 34 rss: 75Mb L: 29/34 MS: 1 ShuffleBytes- 00:07:06.228 [2024-11-26 18:57:23.367894] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.228 [2024-11-26 18:57:23.367920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.228 [2024-11-26 18:57:23.368024] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.228 [2024-11-26 18:57:23.368043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.228 [2024-11-26 18:57:23.368142] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.228 [2024-11-26 18:57:23.368158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.228 #35 NEW cov: 12526 ft: 14933 corp: 23/663b lim: 35 exec/s: 35 rss: 75Mb L: 29/34 MS: 1 CopyPart- 00:07:06.228 [2024-11-26 18:57:23.418398] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.228 [2024-11-26 18:57:23.418424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.228 [2024-11-26 18:57:23.418530] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.228 [2024-11-26 18:57:23.418550] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.228 [2024-11-26 18:57:23.418644] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.228 [2024-11-26 18:57:23.418661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.228 [2024-11-26 18:57:23.418762] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.228 [2024-11-26 18:57:23.418781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.488 #36 NEW cov: 12526 ft: 14955 corp: 24/698b lim: 35 exec/s: 36 rss: 75Mb L: 35/35 MS: 1 CopyPart- 00:07:06.488 [2024-11-26 18:57:23.468046] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.488 [2024-11-26 18:57:23.468071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.488 [2024-11-26 18:57:23.468171] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.488 [2024-11-26 18:57:23.468188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.488 #37 NEW cov: 12526 ft: 14979 corp: 25/720b lim: 35 exec/s: 37 rss: 75Mb L: 22/35 MS: 1 EraseBytes- 00:07:06.488 [2024-11-26 18:57:23.518499] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.488 [2024-11-26 18:57:23.518526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.488 [2024-11-26 18:57:23.518635] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.488 [2024-11-26 18:57:23.518652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.488 [2024-11-26 18:57:23.518746] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.488 [2024-11-26 18:57:23.518764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.488 #38 NEW cov: 12526 ft: 14996 corp: 26/749b lim: 35 exec/s: 38 rss: 75Mb L: 29/35 MS: 1 ChangeByte- 00:07:06.488 [2024-11-26 18:57:23.588707] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.488 [2024-11-26 18:57:23.588733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.488 [2024-11-26 18:57:23.588840] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.488 [2024-11-26 18:57:23.588856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.488 
[2024-11-26 18:57:23.588958] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.488 [2024-11-26 18:57:23.588975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.488 #39 NEW cov: 12526 ft: 15007 corp: 27/778b lim: 35 exec/s: 39 rss: 76Mb L: 29/35 MS: 1 ChangeBinInt- 00:07:06.489 [2024-11-26 18:57:23.658995] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.489 [2024-11-26 18:57:23.659021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.489 [2024-11-26 18:57:23.659124] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.489 [2024-11-26 18:57:23.659141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.489 [2024-11-26 18:57:23.659237] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.489 [2024-11-26 18:57:23.659257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.748 #40 NEW cov: 12526 ft: 15032 corp: 28/812b lim: 35 exec/s: 40 rss: 76Mb L: 34/35 MS: 1 CopyPart- 00:07:06.748 [2024-11-26 18:57:23.729167] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.748 [2024-11-26 18:57:23.729194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.748 [2024-11-26 18:57:23.729298] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.748 [2024-11-26 18:57:23.729313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.748 [2024-11-26 18:57:23.729410] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.748 [2024-11-26 18:57:23.729427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.748 #41 NEW cov: 12526 ft: 15075 corp: 29/841b lim: 35 exec/s: 20 rss: 76Mb L: 29/35 MS: 1 CopyPart- 00:07:06.748 #41 DONE cov: 12526 ft: 15075 corp: 29/841b lim: 35 exec/s: 20 rss: 76Mb 00:07:06.748 ###### Recommended dictionary. ###### 00:07:06.748 "\000\002\000\000" # Uses: 2 00:07:06.748 "\001\000" # Uses: 0 00:07:06.748 ###### End of recommended dictionary. 
###### 00:07:06.748 Done 41 runs in 2 second(s) 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:06.748 18:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:06.748 [2024-11-26 18:57:23.901672] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:07:06.748 [2024-11-26 18:57:23.901739] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742467 ] 00:07:07.007 [2024-11-26 18:57:24.092476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.007 [2024-11-26 18:57:24.130413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.007 [2024-11-26 18:57:24.190022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.007 [2024-11-26 18:57:24.206156] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:07.266 INFO: Running with entropic power schedule (0xFF, 100). 00:07:07.266 INFO: Seed: 2034457100 00:07:07.266 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:07:07.266 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:07:07.266 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:07.266 INFO: A corpus is not provided, starting from an empty corpus 00:07:07.266 #2 INITED exec/s: 0 rss: 68Mb 00:07:07.266 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:07.266 This may also happen if the target rejected all inputs we tried so far 00:07:07.266 [2024-11-26 18:57:24.283643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.266 [2024-11-26 18:57:24.283682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.266 [2024-11-26 18:57:24.283772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.266 [2024-11-26 18:57:24.283788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.266 [2024-11-26 18:57:24.283875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.266 [2024-11-26 18:57:24.283895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.266 [2024-11-26 18:57:24.283987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.266 [2024-11-26 18:57:24.284006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.524 NEW_FUNC[1/717]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:07.524 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:07.524 #13 NEW cov: 12340 ft: 12341 corp: 2/95b lim: 105 exec/s: 0 rss: 74Mb L: 94/94 MS: 1 InsertRepeatedBytes- 00:07:07.524 [2024-11-26 18:57:24.644482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.524 [2024-11-26 
18:57:24.644531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.524 [2024-11-26 18:57:24.644611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.524 [2024-11-26 18:57:24.644631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.524 [2024-11-26 18:57:24.644720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.524 [2024-11-26 18:57:24.644741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.524 #15 NEW cov: 12470 ft: 13515 corp: 3/162b lim: 105 exec/s: 0 rss: 74Mb L: 67/94 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:07.524 [2024-11-26 18:57:24.705074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.524 [2024-11-26 18:57:24.705108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.524 [2024-11-26 18:57:24.705172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.524 [2024-11-26 18:57:24.705192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.524 [2024-11-26 18:57:24.705232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.524 [2024-11-26 18:57:24.705249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.524 [2024-11-26 18:57:24.705341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.524 [2024-11-26 18:57:24.705359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.524 #20 NEW cov: 12476 ft: 13776 corp: 4/259b lim: 105 exec/s: 0 rss: 74Mb L: 97/97 MS: 5 InsertRepeatedBytes-ShuffleBytes-CopyPart-CopyPart-InsertRepeatedBytes- 00:07:07.784 [2024-11-26 18:57:24.755197] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.784 [2024-11-26 18:57:24.755227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.784 [2024-11-26 18:57:24.755299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.784 [2024-11-26 18:57:24.755318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.784 [2024-11-26 18:57:24.755382] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.784 [2024-11-26 18:57:24.755405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.784 #21 NEW cov: 12561 ft: 14046 corp: 5/327b lim: 105 exec/s: 0 rss: 75Mb L: 68/97 MS: 1 InsertByte- 00:07:07.784 [2024-11-26 18:57:24.825804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.784 [2024-11-26 18:57:24.825831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.784 [2024-11-26 18:57:24.825898] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.784 [2024-11-26 18:57:24.825918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.784 [2024-11-26 18:57:24.825985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.784 [2024-11-26 18:57:24.826003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.784 [2024-11-26 18:57:24.826101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.784 [2024-11-26 18:57:24.826120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.784 #22 NEW cov: 12561 ft: 14127 corp: 6/419b lim: 105 exec/s: 0 rss: 75Mb L: 92/97 MS: 1 InsertRepeatedBytes- 00:07:07.784 [2024-11-26 18:57:24.875990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.784 [2024-11-26 18:57:24.876018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.784 [2024-11-26 18:57:24.876101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.784 [2024-11-26 18:57:24.876119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.784 [2024-11-26 18:57:24.876203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744070203113471 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.784 [2024-11-26 18:57:24.876221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.784 [2024-11-26 18:57:24.876317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.784 [2024-11-26 18:57:24.876334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.784 #23 NEW cov: 12561 ft: 14204 corp: 
7/516b lim: 105 exec/s: 0 rss: 75Mb L: 97/97 MS: 1 ChangeByte- 00:07:07.784 [2024-11-26 18:57:24.946294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.784 [2024-11-26 18:57:24.946322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.784 [2024-11-26 18:57:24.946377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414584575 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.784 [2024-11-26 18:57:24.946397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.784 [2024-11-26 18:57:24.946495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.784 [2024-11-26 18:57:24.946513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.784 #24 NEW cov: 12561 ft: 14277 corp: 8/584b lim: 105 exec/s: 0 rss: 75Mb L: 68/97 MS: 1 CrossOver- 00:07:08.044 [2024-11-26 18:57:25.016895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.044 [2024-11-26 18:57:25.016925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.044 [2024-11-26 18:57:25.017005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16717361816395048935 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.044 [2024-11-26 18:57:25.017027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.044 [2024-11-26 18:57:25.017085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16717361816395048935 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.044 [2024-11-26 18:57:25.017103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.044 [2024-11-26 18:57:25.017192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.044 [2024-11-26 18:57:25.017210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.044 #25 NEW cov: 12561 ft: 14344 corp: 9/688b lim: 105 exec/s: 0 rss: 75Mb L: 104/104 MS: 1 CopyPart- 00:07:08.044 [2024-11-26 18:57:25.086916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.044 [2024-11-26 18:57:25.086945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.044 [2024-11-26 18:57:25.087009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414584575 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.044 [2024-11-26 18:57:25.087029] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.044 [2024-11-26 18:57:25.087106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.044 [2024-11-26 18:57:25.087126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.044 #26 NEW cov: 12561 ft: 14390 corp: 10/756b lim: 105 exec/s: 0 rss: 75Mb L: 68/104 MS: 1 CrossOver- 00:07:08.044 [2024-11-26 18:57:25.157338] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.044 [2024-11-26 18:57:25.157368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.044 [2024-11-26 18:57:25.157427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.044 [2024-11-26 18:57:25.157446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.044 [2024-11-26 18:57:25.157508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.044 [2024-11-26 18:57:25.157526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.044 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:08.044 #27 NEW cov: 12584 ft: 14513 corp: 11/823b lim: 105 exec/s: 0 rss: 75Mb L: 67/104 MS: 1 EraseBytes- 00:07:08.044 [2024-11-26 18:57:25.227646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.044 [2024-11-26 18:57:25.227674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.044 [2024-11-26 18:57:25.227747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.044 [2024-11-26 18:57:25.227781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.044 [2024-11-26 18:57:25.227857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.044 [2024-11-26 18:57:25.227874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.044 #28 NEW cov: 12584 ft: 14539 corp: 12/890b lim: 105 exec/s: 28 rss: 75Mb L: 67/104 MS: 1 ChangeBit- 00:07:08.303 [2024-11-26 18:57:25.278062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.303 [2024-11-26 18:57:25.278091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.303 [2024-11-26 18:57:25.278172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.303 [2024-11-26 18:57:25.278190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.303 [2024-11-26 18:57:25.278259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.303 [2024-11-26 18:57:25.278279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.303 [2024-11-26 18:57:25.278379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744071199943274 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.303 [2024-11-26 18:57:25.278398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.303 #29 NEW cov: 12584 ft: 14634 corp: 13/994b lim: 105 exec/s: 29 rss: 75Mb L: 104/104 MS: 1 InsertRepeatedBytes- 00:07:08.303 [2024-11-26 18:57:25.328607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.303 [2024-11-26 18:57:25.328639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.303 [2024-11-26 18:57:25.328699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069421006847 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.303 [2024-11-26 18:57:25.328718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.303 [2024-11-26 18:57:25.328789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744070203113471 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.304 [2024-11-26 18:57:25.328808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.304 [2024-11-26 18:57:25.328896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.304 [2024-11-26 18:57:25.328916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.304 #30 NEW cov: 12584 ft: 14655 corp: 14/1091b lim: 105 exec/s: 30 rss: 75Mb L: 97/104 MS: 1 ChangeBinInt- 00:07:08.304 [2024-11-26 18:57:25.398745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.304 [2024-11-26 18:57:25.398775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.304 [2024-11-26 18:57:25.398843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414584575 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.304 [2024-11-26 18:57:25.398861] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.304 [2024-11-26 18:57:25.398945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073696706559 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.304 [2024-11-26 18:57:25.398962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.304 #31 NEW cov: 12584 ft: 14698 corp: 15/1159b lim: 105 exec/s: 31 rss: 75Mb L: 68/104 MS: 1 ChangeByte- 00:07:08.304 [2024-11-26 18:57:25.449096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.304 [2024-11-26 18:57:25.449126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.304 [2024-11-26 18:57:25.449212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414584575 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.304 [2024-11-26 18:57:25.449232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.304 [2024-11-26 18:57:25.449302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.304 [2024-11-26 18:57:25.449320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.304 #32 NEW cov: 12584 ft: 14712 corp: 16/1227b lim: 105 exec/s: 32 rss: 75Mb L: 68/104 MS: 1 ChangeBinInt- 00:07:08.304 [2024-11-26 18:57:25.499715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.304 [2024-11-26 18:57:25.499745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.304 [2024-11-26 18:57:25.499835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069421006847 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.304 [2024-11-26 18:57:25.499853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.304 [2024-11-26 18:57:25.499917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18445618173014179839 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.304 [2024-11-26 18:57:25.499935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.304 [2024-11-26 18:57:25.500029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.304 [2024-11-26 18:57:25.500050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.563 #33 NEW cov: 12584 ft: 14720 corp: 17/1324b lim: 105 exec/s: 33 rss: 75Mb L: 97/104 MS: 1 ChangeBinInt- 00:07:08.563 [2024-11-26 18:57:25.570184] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.563 [2024-11-26 18:57:25.570215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.563 [2024-11-26 18:57:25.570291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16717361816395048935 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.563 [2024-11-26 18:57:25.570312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.563 [2024-11-26 18:57:25.570376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16717361816395048935 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.563 [2024-11-26 18:57:25.570397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.563 [2024-11-26 18:57:25.570493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.563 [2024-11-26 18:57:25.570514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.563 #34 NEW cov: 12584 ft: 14740 corp: 18/1423b lim: 105 exec/s: 34 rss: 75Mb L: 99/104 MS: 1 EraseBytes- 00:07:08.563 [2024-11-26 18:57:25.640170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.563 [2024-11-26 18:57:25.640200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.563 [2024-11-26 18:57:25.640269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65408 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.563 [2024-11-26 18:57:25.640293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.563 [2024-11-26 18:57:25.640365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.563 [2024-11-26 18:57:25.640386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.563 #35 NEW cov: 12584 ft: 14767 corp: 19/1491b lim: 105 exec/s: 35 rss: 75Mb L: 68/104 MS: 1 ChangeBit- 00:07:08.563 [2024-11-26 18:57:25.690725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.563 [2024-11-26 18:57:25.690756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.563 [2024-11-26 18:57:25.690832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.563 [2024-11-26 18:57:25.690850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:08.563 [2024-11-26 18:57:25.690909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.563 [2024-11-26 18:57:25.690929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.563 [2024-11-26 18:57:25.691022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.563 [2024-11-26 18:57:25.691041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.563 #36 NEW cov: 12584 ft: 14792 corp: 20/1585b lim: 105 exec/s: 36 rss: 75Mb L: 94/104 MS: 1 CopyPart- 00:07:08.563 [2024-11-26 18:57:25.740528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.563 [2024-11-26 18:57:25.740559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.563 [2024-11-26 18:57:25.740634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414584575 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.564 [2024-11-26 18:57:25.740656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.564 [2024-11-26 18:57:25.740721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710580029079158759 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.564 [2024-11-26 18:57:25.740740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.823 #37 NEW cov: 12584 ft: 14817 corp: 21/1653b lim: 105 exec/s: 37 rss: 75Mb L: 68/104 MS: 1 CrossOver- 00:07:08.823 [2024-11-26 18:57:25.811553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.823 [2024-11-26 18:57:25.811583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.823 [2024-11-26 18:57:25.811683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16717361816395048935 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.823 [2024-11-26 18:57:25.811701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.823 [2024-11-26 18:57:25.811788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710606416953993191 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.823 [2024-11-26 18:57:25.811804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.823 [2024-11-26 18:57:25.811892] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.823 [2024-11-26 18:57:25.811913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.823 [2024-11-26 18:57:25.812002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.823 [2024-11-26 18:57:25.812025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.823 #38 NEW cov: 12584 ft: 14871 corp: 22/1758b lim: 105 exec/s: 38 rss: 75Mb L: 105/105 MS: 1 CopyPart- 00:07:08.823 [2024-11-26 18:57:25.861098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.823 [2024-11-26 18:57:25.861129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.823 [2024-11-26 18:57:25.861210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414584575 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.823 [2024-11-26 18:57:25.861228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.823 [2024-11-26 18:57:25.861314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.823 [2024-11-26 18:57:25.861335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.823 #39 NEW cov: 12584 ft: 14919 corp: 23/1841b lim: 105 exec/s: 39 rss: 75Mb L: 83/105 MS: 1 CrossOver- 00:07:08.823 [2024-11-26 18:57:25.911736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.823 [2024-11-26 18:57:25.911763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.823 [2024-11-26 18:57:25.911843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.823 [2024-11-26 18:57:25.911861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.823 [2024-11-26 18:57:25.911940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.823 [2024-11-26 18:57:25.911959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.823 [2024-11-26 18:57:25.912053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.823 [2024-11-26 18:57:25.912073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.823 #40 NEW cov: 12584 ft: 14923 corp: 24/1936b lim: 105 exec/s: 40 rss: 76Mb L: 95/105 MS: 1 InsertByte- 00:07:08.823 [2024-11-26 18:57:25.981985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.823 [2024-11-26 18:57:25.982012] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.823 [2024-11-26 18:57:25.982080] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65408 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.823 [2024-11-26 18:57:25.982099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.823 [2024-11-26 18:57:25.982179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.823 [2024-11-26 18:57:25.982199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.823 #41 NEW cov: 12584 ft: 14931 corp: 25/2004b lim: 105 exec/s: 41 rss: 76Mb L: 68/105 MS: 1 CrossOver- 00:07:09.083 [2024-11-26 18:57:26.052711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.052740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.083 [2024-11-26 18:57:26.052836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16717361816395048935 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.052858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.083 [2024-11-26 18:57:26.052920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16717361816395048935 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.052940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.083 [2024-11-26 18:57:26.053024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.053044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.083 [2024-11-26 18:57:26.053129] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.053147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:09.083 #42 NEW cov: 12584 ft: 14979 corp: 26/2109b lim: 105 exec/s: 42 rss: 76Mb L: 105/105 MS: 1 CrossOver- 00:07:09.083 [2024-11-26 18:57:26.102615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.102645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.083 [2024-11-26 18:57:26.102719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.102737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.083 [2024-11-26 18:57:26.102824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.102844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.083 #43 NEW cov: 12584 ft: 15040 corp: 27/2177b lim: 105 exec/s: 43 rss: 76Mb L: 68/105 MS: 1 ShuffleBytes- 00:07:09.083 [2024-11-26 18:57:26.153213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.153240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.083 [2024-11-26 18:57:26.153340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.153359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.083 [2024-11-26 18:57:26.153437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.153458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.083 [2024-11-26 18:57:26.153539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:72057589742960640 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.153557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.083 #44 NEW cov: 12584 ft: 15107 corp: 28/2278b lim: 105 exec/s: 44 rss: 76Mb L: 101/105 MS: 1 InsertRepeatedBytes- 00:07:09.083 [2024-11-26 18:57:26.223396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.223427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.083 [2024-11-26 18:57:26.223507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.223525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.083 [2024-11-26 18:57:26.223605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:64512 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.223623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.083 [2024-11-26 18:57:26.223712] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 
lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.223731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.083 #45 NEW cov: 12584 ft: 15118 corp: 29/2370b lim: 105 exec/s: 45 rss: 76Mb L: 92/105 MS: 1 ChangeBit- 00:07:09.083 [2024-11-26 18:57:26.273296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.273324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.083 [2024-11-26 18:57:26.273400] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.273419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.083 [2024-11-26 18:57:26.273496] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744071199943274 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.083 [2024-11-26 18:57:26.273516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.343 #46 NEW cov: 12584 ft: 15139 corp: 30/2453b lim: 105 exec/s: 23 rss: 76Mb L: 83/105 MS: 1 EraseBytes- 00:07:09.343 #46 DONE cov: 12584 ft: 15139 corp: 30/2453b lim: 105 exec/s: 23 rss: 76Mb 00:07:09.343 Done 46 runs in 2 second(s) 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:09.343 18:57:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:09.344 [2024-11-26 18:57:26.467820] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:07:09.344 [2024-11-26 18:57:26.467890] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742828 ] 00:07:09.603 [2024-11-26 18:57:26.655195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.603 [2024-11-26 18:57:26.693240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.603 [2024-11-26 18:57:26.752216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.603 [2024-11-26 18:57:26.768358] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:09.603 INFO: Running with entropic power schedule (0xFF, 100). 00:07:09.603 INFO: Seed: 301494742 00:07:09.603 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:07:09.603 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:07:09.603 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:09.603 INFO: A corpus is not provided, starting from an empty corpus 00:07:09.603 #2 INITED exec/s: 0 rss: 67Mb 00:07:09.603 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:09.603 This may also happen if the target rejected all inputs we tried so far 00:07:09.862 [2024-11-26 18:57:26.823821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.862 [2024-11-26 18:57:26.823855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.862 [2024-11-26 18:57:26.823924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.862 [2024-11-26 18:57:26.823946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.122 NEW_FUNC[1/718]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:10.122 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:10.122 #6 NEW cov: 12379 ft: 12368 corp: 2/58b lim: 120 exec/s: 0 rss: 74Mb L: 57/57 MS: 4 CMP-CrossOver-ShuffleBytes-InsertRepeatedBytes- DE: "\002\000\000\000"- 00:07:10.122 [2024-11-26 18:57:27.154794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094563821800 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.122 [2024-11-26 18:57:27.154835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.122 [2024-11-26 18:57:27.154904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.122 [2024-11-26 18:57:27.154927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.122 #17 NEW cov: 12492 ft: 12947 corp: 3/115b lim: 120 exec/s: 0 rss: 74Mb L: 57/57 MS: 1 ChangeByte- 00:07:10.122 [2024-11-26 18:57:27.215050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094563821800 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.122 [2024-11-26 18:57:27.215079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.122 [2024-11-26 18:57:27.215145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.122 [2024-11-26 18:57:27.215167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.122 [2024-11-26 18:57:27.215234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2814753674684648 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.122 [2024-11-26 18:57:27.215258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.122 #18 NEW cov: 12498 ft: 13623 corp: 4/205b lim: 120 exec/s: 0 rss: 74Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:07:10.122 [2024-11-26 18:57:27.274846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 
lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.122 [2024-11-26 18:57:27.274878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.122 #19 NEW cov: 12583 ft: 14687 corp: 5/252b lim: 120 exec/s: 0 rss: 74Mb L: 47/90 MS: 1 CrossOver- 00:07:10.122 [2024-11-26 18:57:27.314962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.122 [2024-11-26 18:57:27.314992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.380 #20 NEW cov: 12583 ft: 14780 corp: 6/299b lim: 120 exec/s: 0 rss: 75Mb L: 47/90 MS: 1 ChangeBinInt- 00:07:10.380 [2024-11-26 18:57:27.375289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.380 [2024-11-26 18:57:27.375317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.380 [2024-11-26 18:57:27.375385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.380 [2024-11-26 18:57:27.375406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.380 #21 NEW cov: 12583 ft: 14821 corp: 7/366b lim: 120 exec/s: 0 rss: 75Mb L: 67/90 MS: 1 CrossOver- 00:07:10.380 [2024-11-26 18:57:27.415239] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.380 [2024-11-26 18:57:27.415269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.380 #22 NEW cov: 12583 ft: 14876 corp: 8/401b lim: 120 exec/s: 0 rss: 75Mb L: 35/90 MS: 1 EraseBytes- 00:07:10.380 [2024-11-26 18:57:27.455497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.380 [2024-11-26 18:57:27.455527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.380 [2024-11-26 18:57:27.455598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1664054674374266647 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.380 [2024-11-26 18:57:27.455621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.380 #23 NEW cov: 12583 ft: 14924 corp: 9/464b lim: 120 exec/s: 0 rss: 75Mb L: 63/90 MS: 1 InsertRepeatedBytes- 00:07:10.380 [2024-11-26 18:57:27.515660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.380 [2024-11-26 18:57:27.515688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.380 [2024-11-26 18:57:27.515756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 
lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.380 [2024-11-26 18:57:27.515777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.380 #24 NEW cov: 12583 ft: 15052 corp: 10/521b lim: 120 exec/s: 0 rss: 75Mb L: 57/90 MS: 1 PersAutoDict- DE: "\002\000\000\000"- 00:07:10.380 [2024-11-26 18:57:27.556098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094563821800 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.380 [2024-11-26 18:57:27.556128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.380 [2024-11-26 18:57:27.556188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.380 [2024-11-26 18:57:27.556210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.380 [2024-11-26 18:57:27.556278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:16782920098821177320 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.380 [2024-11-26 18:57:27.556299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.380 [2024-11-26 18:57:27.556367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.380 [2024-11-26 18:57:27.556387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.639 #25 NEW cov: 12583 ft: 15485 corp: 11/619b lim: 120 exec/s: 0 rss: 75Mb L: 98/98 MS: 1 CMP- DE: "\377\377\377\377\376\377\377\377"- 00:07:10.639 [2024-11-26 18:57:27.615928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.639 [2024-11-26 18:57:27.615958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.639 [2024-11-26 18:57:27.616023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446718686157864703 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.639 [2024-11-26 18:57:27.616046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.639 #26 NEW cov: 12583 ft: 15491 corp: 12/684b lim: 120 exec/s: 0 rss: 75Mb L: 65/98 MS: 1 PersAutoDict- DE: "\377\377\377\377\376\377\377\377"- 00:07:10.639 [2024-11-26 18:57:27.656368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094563821800 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.639 [2024-11-26 18:57:27.656397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.639 [2024-11-26 18:57:27.656456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.639 [2024-11-26 18:57:27.656483] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.639 [2024-11-26 18:57:27.656549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:16782920098821177320 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.639 [2024-11-26 18:57:27.656577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.639 [2024-11-26 18:57:27.656642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.639 [2024-11-26 18:57:27.656662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.639 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:10.639 #27 NEW cov: 12600 ft: 15536 corp: 13/782b lim: 120 exec/s: 0 rss: 75Mb L: 98/98 MS: 1 ShuffleBytes- 00:07:10.639 [2024-11-26 18:57:27.716216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.639 [2024-11-26 18:57:27.716244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.639 [2024-11-26 18:57:27.716312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.639 [2024-11-26 18:57:27.716334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.639 #28 NEW cov: 12600 ft: 15586 corp: 14/839b lim: 120 exec/s: 0 rss: 75Mb L: 57/98 MS: 1 ShuffleBytes- 00:07:10.639 [2024-11-26 18:57:27.756336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.639 [2024-11-26 18:57:27.756366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.639 [2024-11-26 18:57:27.756433] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446718686157864703 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.639 [2024-11-26 18:57:27.756456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.639 #29 NEW cov: 12600 ft: 15610 corp: 15/904b lim: 120 exec/s: 0 rss: 75Mb L: 65/98 MS: 1 ChangeBit- 00:07:10.639 [2024-11-26 18:57:27.816504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.639 [2024-11-26 18:57:27.816533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.639 [2024-11-26 18:57:27.816598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.639 [2024-11-26 18:57:27.816621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.639 #30 NEW cov: 12600 ft: 15682 corp: 16/971b lim: 120 exec/s: 30 rss: 75Mb L: 67/98 MS: 1 CopyPart- 00:07:10.898 [2024-11-26 18:57:27.856617] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.899 [2024-11-26 18:57:27.856646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.899 [2024-11-26 18:57:27.856713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.899 [2024-11-26 18:57:27.856735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.899 #31 NEW cov: 12600 ft: 15783 corp: 17/1028b lim: 120 exec/s: 31 rss: 75Mb L: 57/98 MS: 1 ChangeBinInt- 00:07:10.899 [2024-11-26 18:57:27.917072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094563821800 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.899 [2024-11-26 18:57:27.917104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.899 [2024-11-26 18:57:27.917165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.899 [2024-11-26 18:57:27.917186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.899 [2024-11-26 18:57:27.917252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:16782920098821177320 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.899 [2024-11-26 18:57:27.917272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.899 [2024-11-26 18:57:27.917338] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2965102661431011625 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.899 [2024-11-26 18:57:27.917357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.899 #32 NEW cov: 12600 ft: 15832 corp: 18/1126b lim: 120 exec/s: 32 rss: 75Mb L: 98/98 MS: 1 ChangeBinInt- 00:07:10.899 [2024-11-26 18:57:27.977274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094563821800 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.899 [2024-11-26 18:57:27.977304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.899 [2024-11-26 18:57:27.977367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.899 [2024-11-26 18:57:27.977390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.899 [2024-11-26 18:57:27.977457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:16782920098821177320 len:59625 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:10.899 [2024-11-26 18:57:27.977486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.899 [2024-11-26 18:57:27.977567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.899 [2024-11-26 18:57:27.977588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.899 #33 NEW cov: 12600 ft: 15869 corp: 19/1224b lim: 120 exec/s: 33 rss: 75Mb L: 98/98 MS: 1 ShuffleBytes- 00:07:10.899 [2024-11-26 18:57:28.017085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.899 [2024-11-26 18:57:28.017113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.899 [2024-11-26 18:57:28.017180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:102691770114 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.899 [2024-11-26 18:57:28.017202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.899 #34 NEW cov: 12600 ft: 15890 corp: 20/1275b lim: 120 exec/s: 34 rss: 75Mb L: 51/98 MS: 1 PersAutoDict- DE: "\002\000\000\000"- 00:07:10.899 [2024-11-26 18:57:28.057350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.899 [2024-11-26 18:57:28.057379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.899 [2024-11-26 18:57:28.057443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.899 [2024-11-26 18:57:28.057469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.899 [2024-11-26 18:57:28.057544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073322168319 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.899 [2024-11-26 18:57:28.057568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.899 #35 NEW cov: 12600 ft: 15904 corp: 21/1350b lim: 120 exec/s: 35 rss: 75Mb L: 75/98 MS: 1 InsertRepeatedBytes- 00:07:11.158 [2024-11-26 18:57:28.117504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.117534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.158 [2024-11-26 18:57:28.117597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.117618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:07:11.158 [2024-11-26 18:57:28.117685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:16721586144380774632 len:5912 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.117705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.158 #36 NEW cov: 12600 ft: 15927 corp: 22/1444b lim: 120 exec/s: 36 rss: 75Mb L: 94/98 MS: 1 CrossOver- 00:07:11.158 [2024-11-26 18:57:28.177695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.177725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.158 [2024-11-26 18:57:28.177785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.177807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.158 [2024-11-26 18:57:28.177876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.177897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.158 #37 NEW cov: 12600 ft: 15941 corp: 23/1523b lim: 120 exec/s: 37 rss: 75Mb L: 79/98 MS: 1 CrossOver- 00:07:11.158 [2024-11-26 18:57:28.238045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.238074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.158 [2024-11-26 18:57:28.238132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.238154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.158 [2024-11-26 18:57:28.238221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3070836804741352 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.238243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.158 [2024-11-26 18:57:28.238329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:16782920098433788136 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.238349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.158 #38 NEW cov: 12600 ft: 15977 corp: 24/1638b lim: 120 exec/s: 38 rss: 75Mb L: 115/115 MS: 1 CrossOver- 00:07:11.158 [2024-11-26 18:57:28.278164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094563821800 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 
[2024-11-26 18:57:28.278193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.158 [2024-11-26 18:57:28.278253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.278275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.158 [2024-11-26 18:57:28.278342] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:16782920098433788136 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.278364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.158 [2024-11-26 18:57:28.278431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2860009729821184 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.278452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.158 #39 NEW cov: 12600 ft: 16013 corp: 25/1749b lim: 120 exec/s: 39 rss: 75Mb L: 111/115 MS: 1 CrossOver- 00:07:11.158 [2024-11-26 18:57:28.338311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.338342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.158 [2024-11-26 18:57:28.338399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446718686157864703 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.338420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.158 [2024-11-26 18:57:28.338491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:16782920098433788136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.338514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.158 [2024-11-26 18:57:28.338583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.158 [2024-11-26 18:57:28.338602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.158 #40 NEW cov: 12600 ft: 16019 corp: 26/1850b lim: 120 exec/s: 40 rss: 75Mb L: 101/115 MS: 1 CrossOver- 00:07:11.418 [2024-11-26 18:57:28.378277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.418 [2024-11-26 18:57:28.378308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.418 [2024-11-26 18:57:28.378374] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:11.418 [2024-11-26 18:57:28.378397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.418 [2024-11-26 18:57:28.378477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.418 [2024-11-26 18:57:28.378498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.418 #41 NEW cov: 12600 ft: 16038 corp: 27/1929b lim: 120 exec/s: 41 rss: 76Mb L: 79/115 MS: 1 CrossOver- 00:07:11.418 [2024-11-26 18:57:28.438273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.418 [2024-11-26 18:57:28.438303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.418 [2024-11-26 18:57:28.438375] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.418 [2024-11-26 18:57:28.438398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.418 #42 NEW cov: 12600 ft: 16051 corp: 28/1996b lim: 120 exec/s: 42 rss: 76Mb L: 67/115 MS: 1 CopyPart- 00:07:11.418 [2024-11-26 18:57:28.478379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.418 [2024-11-26 18:57:28.478409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.418 [2024-11-26 18:57:28.478487] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446743974421987326 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.418 [2024-11-26 18:57:28.478512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.418 #43 NEW cov: 12600 ft: 16056 corp: 29/2062b lim: 120 exec/s: 43 rss: 76Mb L: 66/115 MS: 1 InsertByte- 00:07:11.418 [2024-11-26 18:57:28.518489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:43241 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.419 [2024-11-26 18:57:28.518518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.419 [2024-11-26 18:57:28.518586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1664054674374266647 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.419 [2024-11-26 18:57:28.518610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.419 #44 NEW cov: 12600 ft: 16084 corp: 30/2125b lim: 120 exec/s: 44 rss: 76Mb L: 63/115 MS: 1 ChangeBit- 00:07:11.419 [2024-11-26 18:57:28.578517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.419 [2024-11-26 18:57:28.578547] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.419 #45 NEW cov: 12600 ft: 16092 corp: 31/2163b lim: 120 exec/s: 45 rss: 76Mb L: 38/115 MS: 1 EraseBytes- 00:07:11.419 [2024-11-26 18:57:28.618798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:5912 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.419 [2024-11-26 18:57:28.618828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.419 [2024-11-26 18:57:28.618894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446743974421987326 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.419 [2024-11-26 18:57:28.618916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.678 #46 NEW cov: 12600 ft: 16117 corp: 32/2229b lim: 120 exec/s: 46 rss: 76Mb L: 66/115 MS: 1 ChangeBinInt- 00:07:11.678 [2024-11-26 18:57:28.679107] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094563821800 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.678 [2024-11-26 18:57:28.679136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.678 [2024-11-26 18:57:28.679198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.678 [2024-11-26 18:57:28.679219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.678 [2024-11-26 18:57:28.679287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:16782920098821177320 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.678 [2024-11-26 18:57:28.679306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.678 #47 NEW cov: 12607 ft: 16133 corp: 33/2317b lim: 120 exec/s: 47 rss: 76Mb L: 88/115 MS: 1 EraseBytes- 00:07:11.678 [2024-11-26 18:57:28.719183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094563821800 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.678 [2024-11-26 18:57:28.719212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.678 [2024-11-26 18:57:28.719279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.678 [2024-11-26 18:57:28.719301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.678 [2024-11-26 18:57:28.719368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2814753674684648 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.678 [2024-11-26 18:57:28.719388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.678 #48 NEW cov: 12607 ft: 16142 corp: 34/2411b lim: 120 exec/s: 48 rss: 76Mb L: 94/115 
MS: 1 CrossOver- 00:07:11.678 [2024-11-26 18:57:28.759135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094575028456 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.678 [2024-11-26 18:57:28.759163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.678 [2024-11-26 18:57:28.759229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.678 [2024-11-26 18:57:28.759252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.679 #49 NEW cov: 12607 ft: 16203 corp: 35/2468b lim: 120 exec/s: 49 rss: 76Mb L: 57/115 MS: 1 PersAutoDict- DE: "\377\377\377\377\376\377\377\377"- 00:07:11.679 [2024-11-26 18:57:28.799527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16782920094576535784 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.679 [2024-11-26 18:57:28.799556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.679 [2024-11-26 18:57:28.799630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16782920098433788136 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.679 [2024-11-26 18:57:28.799652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.679 [2024-11-26 18:57:28.799720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:16717373812255549672 len:59625 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.679 [2024-11-26 18:57:28.799746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.679 [2024-11-26 18:57:28.799815] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:16782920098433788136 len:59648 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.679 [2024-11-26 18:57:28.799834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.679 #50 NEW cov: 12607 ft: 16241 corp: 36/2584b lim: 120 exec/s: 25 rss: 76Mb L: 116/116 MS: 1 InsertByte- 00:07:11.679 #50 DONE cov: 12607 ft: 16241 corp: 36/2584b lim: 120 exec/s: 25 rss: 76Mb 00:07:11.679 ###### Recommended dictionary. ###### 00:07:11.679 "\002\000\000\000" # Uses: 2 00:07:11.679 "\377\377\377\377\376\377\377\377" # Uses: 2 00:07:11.679 ###### End of recommended dictionary. 
###### 00:07:11.679 Done 50 runs in 2 second(s) 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:11.938 18:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:11.938 [2024-11-26 18:57:28.998209] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:07:11.938 [2024-11-26 18:57:28.998277] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743183 ] 00:07:12.197 [2024-11-26 18:57:29.188916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.197 [2024-11-26 18:57:29.227016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.197 [2024-11-26 18:57:29.286239] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.197 [2024-11-26 18:57:29.302386] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:12.197 INFO: Running with entropic power schedule (0xFF, 100). 00:07:12.197 INFO: Seed: 2835494591 00:07:12.197 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:07:12.197 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:07:12.197 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:12.197 INFO: A corpus is not provided, starting from an empty corpus 00:07:12.197 #2 INITED exec/s: 0 rss: 67Mb 00:07:12.197 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:12.197 This may also happen if the target rejected all inputs we tried so far 00:07:12.197 [2024-11-26 18:57:29.357668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.197 [2024-11-26 18:57:29.357698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.456 NEW_FUNC[1/716]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:12.456 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:12.456 #20 NEW cov: 12320 ft: 12308 corp: 2/37b lim: 100 exec/s: 0 rss: 74Mb L: 36/36 MS: 3 ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:07:12.715 [2024-11-26 18:57:29.679003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.715 [2024-11-26 18:57:29.679053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.715 [2024-11-26 18:57:29.679129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.715 [2024-11-26 18:57:29.679156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.715 [2024-11-26 18:57:29.679234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.716 [2024-11-26 18:57:29.679261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.716 [2024-11-26 18:57:29.679339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.716 [2024-11-26 18:57:29.679367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.716 #22 NEW cov: 12435 ft: 13327 corp: 
3/135b lim: 100 exec/s: 0 rss: 75Mb L: 98/98 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:12.716 [2024-11-26 18:57:29.718930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.716 [2024-11-26 18:57:29.718959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.716 [2024-11-26 18:57:29.719013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.716 [2024-11-26 18:57:29.719032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.716 [2024-11-26 18:57:29.719097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.716 [2024-11-26 18:57:29.719117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.716 [2024-11-26 18:57:29.719183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.716 [2024-11-26 18:57:29.719200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.716 #23 NEW cov: 12441 ft: 13569 corp: 4/234b lim: 100 exec/s: 0 rss: 75Mb L: 99/99 MS: 1 InsertByte- 00:07:12.716 [2024-11-26 18:57:29.779088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.716 [2024-11-26 18:57:29.779117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.716 [2024-11-26 18:57:29.779175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.716 [2024-11-26 18:57:29.779199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.716 [2024-11-26 18:57:29.779264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.716 [2024-11-26 18:57:29.779286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.716 [2024-11-26 18:57:29.779352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.716 [2024-11-26 18:57:29.779370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.716 #29 NEW cov: 12526 ft: 13923 corp: 5/333b lim: 100 exec/s: 0 rss: 75Mb L: 99/99 MS: 1 ChangeBinInt- 00:07:12.716 [2024-11-26 18:57:29.838893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.716 [2024-11-26 18:57:29.838922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.716 #30 NEW cov: 12526 ft: 14077 corp: 6/369b lim: 100 exec/s: 0 rss: 75Mb L: 36/99 MS: 1 CopyPart- 00:07:12.716 [2024-11-26 18:57:29.899418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.716 [2024-11-26 18:57:29.899445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:12.716 [2024-11-26 18:57:29.899509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.716 [2024-11-26 18:57:29.899529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.716 [2024-11-26 18:57:29.899593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.716 [2024-11-26 18:57:29.899616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.716 [2024-11-26 18:57:29.899679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.716 [2024-11-26 18:57:29.899698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.975 #31 NEW cov: 12526 ft: 14158 corp: 7/450b lim: 100 exec/s: 0 rss: 75Mb L: 81/99 MS: 1 CrossOver- 00:07:12.975 [2024-11-26 18:57:29.959433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.975 [2024-11-26 18:57:29.959460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.975 [2024-11-26 18:57:29.959536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.975 [2024-11-26 18:57:29.959556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.975 [2024-11-26 18:57:29.959624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.975 [2024-11-26 18:57:29.959644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.975 #32 NEW cov: 12526 ft: 14496 corp: 8/519b lim: 100 exec/s: 0 rss: 75Mb L: 69/99 MS: 1 InsertRepeatedBytes- 00:07:12.975 [2024-11-26 18:57:29.999315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.975 [2024-11-26 18:57:29.999341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.975 #33 NEW cov: 12526 ft: 14577 corp: 9/555b lim: 100 exec/s: 0 rss: 75Mb L: 36/99 MS: 1 ChangeBit- 00:07:12.975 [2024-11-26 18:57:30.039476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.975 [2024-11-26 18:57:30.039507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.975 #34 NEW cov: 12526 ft: 14645 corp: 10/591b lim: 100 exec/s: 0 rss: 75Mb L: 36/99 MS: 1 CMP- DE: "\376\377\377\365"- 00:07:12.975 [2024-11-26 18:57:30.099662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.975 [2024-11-26 18:57:30.099695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.975 #35 NEW cov: 12526 ft: 14679 corp: 11/630b lim: 100 exec/s: 0 rss: 75Mb L: 39/99 MS: 1 CrossOver- 00:07:12.975 [2024-11-26 18:57:30.160172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 
00:07:12.976 [2024-11-26 18:57:30.160202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.976 [2024-11-26 18:57:30.160261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.976 [2024-11-26 18:57:30.160282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.976 [2024-11-26 18:57:30.160347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.976 [2024-11-26 18:57:30.160368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.976 [2024-11-26 18:57:30.160435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.976 [2024-11-26 18:57:30.160456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.976 #36 NEW cov: 12526 ft: 14715 corp: 12/729b lim: 100 exec/s: 0 rss: 75Mb L: 99/99 MS: 1 ChangeByte- 00:07:13.235 [2024-11-26 18:57:30.199932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.235 [2024-11-26 18:57:30.199960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.235 #37 NEW cov: 12526 ft: 14818 corp: 13/765b lim: 100 exec/s: 0 rss: 75Mb L: 36/99 MS: 1 ChangeBinInt- 00:07:13.235 [2024-11-26 18:57:30.240420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.235 [2024-11-26 18:57:30.240448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.235 [2024-11-26 18:57:30.240511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.235 [2024-11-26 18:57:30.240531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.235 [2024-11-26 18:57:30.240598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.235 [2024-11-26 18:57:30.240617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.235 [2024-11-26 18:57:30.240684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.235 [2024-11-26 18:57:30.240705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.235 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:13.235 #38 NEW cov: 12549 ft: 14895 corp: 14/864b lim: 100 exec/s: 0 rss: 75Mb L: 99/99 MS: 1 ChangeByte- 00:07:13.235 [2024-11-26 18:57:30.300573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.235 [2024-11-26 18:57:30.300601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.235 [2024-11-26 18:57:30.300656] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.235 [2024-11-26 18:57:30.300677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.235 [2024-11-26 18:57:30.300745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.235 [2024-11-26 18:57:30.300768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.235 [2024-11-26 18:57:30.300831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.235 [2024-11-26 18:57:30.300850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.235 #39 NEW cov: 12549 ft: 14952 corp: 15/963b lim: 100 exec/s: 39 rss: 75Mb L: 99/99 MS: 1 ShuffleBytes- 00:07:13.235 [2024-11-26 18:57:30.360381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.235 [2024-11-26 18:57:30.360408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.235 #40 NEW cov: 12549 ft: 14982 corp: 16/986b lim: 100 exec/s: 40 rss: 75Mb L: 23/99 MS: 1 EraseBytes- 00:07:13.235 [2024-11-26 18:57:30.400749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.235 [2024-11-26 18:57:30.400776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.236 [2024-11-26 18:57:30.400834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.236 [2024-11-26 18:57:30.400854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.236 [2024-11-26 18:57:30.400919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.236 [2024-11-26 18:57:30.400940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.236 #41 NEW cov: 12549 ft: 14996 corp: 17/1055b lim: 100 exec/s: 41 rss: 75Mb L: 69/99 MS: 1 ChangeBit- 00:07:13.236 [2024-11-26 18:57:30.440963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.236 [2024-11-26 18:57:30.440990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.236 [2024-11-26 18:57:30.441048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.236 [2024-11-26 18:57:30.441069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.236 [2024-11-26 18:57:30.441134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.236 [2024-11-26 18:57:30.441155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.236 [2024-11-26 18:57:30.441220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 
nsid:0 00:07:13.236 [2024-11-26 18:57:30.441238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.495 #42 NEW cov: 12549 ft: 15003 corp: 18/1153b lim: 100 exec/s: 42 rss: 75Mb L: 98/99 MS: 1 ShuffleBytes- 00:07:13.495 [2024-11-26 18:57:30.481069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.495 [2024-11-26 18:57:30.481096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.495 [2024-11-26 18:57:30.481152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.495 [2024-11-26 18:57:30.481171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.495 [2024-11-26 18:57:30.481236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.495 [2024-11-26 18:57:30.481259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.495 [2024-11-26 18:57:30.481323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.496 [2024-11-26 18:57:30.481341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.496 #43 NEW cov: 12549 ft: 15113 corp: 19/1243b lim: 100 exec/s: 43 rss: 75Mb L: 90/99 MS: 1 EraseBytes- 00:07:13.496 [2024-11-26 18:57:30.521199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.496 [2024-11-26 18:57:30.521226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.496 [2024-11-26 18:57:30.521286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.496 [2024-11-26 18:57:30.521307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.496 [2024-11-26 18:57:30.521371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.496 [2024-11-26 18:57:30.521391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.496 [2024-11-26 18:57:30.521476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.496 [2024-11-26 18:57:30.521495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.496 #44 NEW cov: 12549 ft: 15142 corp: 20/1342b lim: 100 exec/s: 44 rss: 76Mb L: 99/99 MS: 1 ChangeBinInt- 00:07:13.496 [2024-11-26 18:57:30.580988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.496 [2024-11-26 18:57:30.581015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.496 #45 NEW cov: 12549 ft: 15150 corp: 21/1377b lim: 100 exec/s: 45 rss: 76Mb L: 35/99 MS: 1 InsertRepeatedBytes- 00:07:13.496 [2024-11-26 18:57:30.641512] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.496 [2024-11-26 18:57:30.641539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.496 [2024-11-26 18:57:30.641592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.496 [2024-11-26 18:57:30.641612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.496 [2024-11-26 18:57:30.641677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.496 [2024-11-26 18:57:30.641698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.496 [2024-11-26 18:57:30.641763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.496 [2024-11-26 18:57:30.641782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.496 #46 NEW cov: 12549 ft: 15157 corp: 22/1476b lim: 100 exec/s: 46 rss: 76Mb L: 99/99 MS: 1 ShuffleBytes- 00:07:13.496 [2024-11-26 18:57:30.681632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.496 [2024-11-26 18:57:30.681660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.496 [2024-11-26 18:57:30.681717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.496 [2024-11-26 18:57:30.681737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.496 [2024-11-26 18:57:30.681801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.496 [2024-11-26 18:57:30.681824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.496 [2024-11-26 18:57:30.681888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.496 [2024-11-26 18:57:30.681906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.496 #47 NEW cov: 12549 ft: 15163 corp: 23/1575b lim: 100 exec/s: 47 rss: 76Mb L: 99/99 MS: 1 CopyPart- 00:07:13.755 [2024-11-26 18:57:30.721879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.755 [2024-11-26 18:57:30.721907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.755 [2024-11-26 18:57:30.721961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.755 [2024-11-26 18:57:30.721981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.755 [2024-11-26 18:57:30.722045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.755 [2024-11-26 18:57:30.722066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.755 [2024-11-26 18:57:30.722129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.755 [2024-11-26 18:57:30.722147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.755 [2024-11-26 18:57:30.722213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:13.755 [2024-11-26 18:57:30.722232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:13.755 #48 NEW cov: 12549 ft: 15261 corp: 24/1675b lim: 100 exec/s: 48 rss: 76Mb L: 100/100 MS: 1 InsertByte- 00:07:13.755 [2024-11-26 18:57:30.761749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.755 [2024-11-26 18:57:30.761776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.755 [2024-11-26 18:57:30.761830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.755 [2024-11-26 18:57:30.761850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.755 [2024-11-26 18:57:30.761916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.755 [2024-11-26 18:57:30.761935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.755 #49 NEW cov: 12549 ft: 15326 corp: 25/1744b lim: 100 exec/s: 49 rss: 76Mb L: 69/100 MS: 1 ShuffleBytes- 00:07:13.755 [2024-11-26 18:57:30.801962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.755 [2024-11-26 18:57:30.801989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.755 [2024-11-26 18:57:30.802047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.755 [2024-11-26 18:57:30.802068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.755 [2024-11-26 18:57:30.802134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.755 [2024-11-26 18:57:30.802152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.755 [2024-11-26 18:57:30.802233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.755 [2024-11-26 18:57:30.802254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.755 #50 NEW cov: 12549 ft: 15371 corp: 26/1835b lim: 100 exec/s: 50 rss: 76Mb L: 91/100 MS: 1 InsertByte- 00:07:13.755 [2024-11-26 18:57:30.862293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.755 [2024-11-26 18:57:30.862321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:13.755 [2024-11-26 18:57:30.862378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.755 [2024-11-26 18:57:30.862398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.755 [2024-11-26 18:57:30.862463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.755 [2024-11-26 18:57:30.862487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.755 [2024-11-26 18:57:30.862551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.755 [2024-11-26 18:57:30.862569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.755 [2024-11-26 18:57:30.862633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:13.755 [2024-11-26 18:57:30.862650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:13.755 #51 NEW cov: 12549 ft: 15386 corp: 27/1935b lim: 100 exec/s: 51 rss: 76Mb L: 100/100 MS: 1 CrossOver- 00:07:13.755 [2024-11-26 18:57:30.921962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.755 [2024-11-26 18:57:30.921990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.755 #52 NEW cov: 12549 ft: 15397 corp: 28/1970b lim: 100 exec/s: 52 rss: 76Mb L: 35/100 MS: 1 CopyPart- 00:07:14.014 [2024-11-26 18:57:30.982132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.014 [2024-11-26 18:57:30.982160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.014 #53 NEW cov: 12549 ft: 15459 corp: 29/2006b lim: 100 exec/s: 53 rss: 77Mb L: 36/100 MS: 1 ChangeBit- 00:07:14.014 [2024-11-26 18:57:31.042674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.014 [2024-11-26 18:57:31.042702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.014 [2024-11-26 18:57:31.042757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.014 [2024-11-26 18:57:31.042777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.014 [2024-11-26 18:57:31.042839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:14.015 [2024-11-26 18:57:31.042861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.015 [2024-11-26 18:57:31.042928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:14.015 [2024-11-26 18:57:31.042948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.015 #54 NEW cov: 12549 ft: 15460 corp: 30/2097b lim: 100 
exec/s: 54 rss: 77Mb L: 91/100 MS: 1 PersAutoDict- DE: "\376\377\377\365"- 00:07:14.015 [2024-11-26 18:57:31.102592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.015 [2024-11-26 18:57:31.102619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.015 [2024-11-26 18:57:31.102690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.015 [2024-11-26 18:57:31.102709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.015 #55 NEW cov: 12549 ft: 15704 corp: 31/2137b lim: 100 exec/s: 55 rss: 77Mb L: 40/100 MS: 1 PersAutoDict- DE: "\376\377\377\365"- 00:07:14.015 [2024-11-26 18:57:31.142953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.015 [2024-11-26 18:57:31.142980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.015 [2024-11-26 18:57:31.143041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.015 [2024-11-26 18:57:31.143060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.015 [2024-11-26 18:57:31.143125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:14.015 [2024-11-26 18:57:31.143147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.015 [2024-11-26 18:57:31.143213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:14.015 [2024-11-26 18:57:31.143233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.015 #56 NEW cov: 12549 ft: 15716 corp: 32/2236b lim: 100 exec/s: 56 rss: 77Mb L: 99/100 MS: 1 ChangeBinInt- 00:07:14.015 [2024-11-26 18:57:31.183199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.015 [2024-11-26 18:57:31.183228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.015 [2024-11-26 18:57:31.183280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.015 [2024-11-26 18:57:31.183299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.015 [2024-11-26 18:57:31.183363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:14.015 [2024-11-26 18:57:31.183384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.015 [2024-11-26 18:57:31.183448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:14.015 [2024-11-26 18:57:31.183466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.015 [2024-11-26 18:57:31.183537] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:14.015 [2024-11-26 18:57:31.183556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:14.015 #57 NEW cov: 12549 ft: 15734 corp: 33/2336b lim: 100 exec/s: 57 rss: 77Mb L: 100/100 MS: 1 InsertByte- 00:07:14.015 [2024-11-26 18:57:31.223167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.015 [2024-11-26 18:57:31.223194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.015 [2024-11-26 18:57:31.223245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.015 [2024-11-26 18:57:31.223265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.015 [2024-11-26 18:57:31.223330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:14.015 [2024-11-26 18:57:31.223351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.015 [2024-11-26 18:57:31.223420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:14.015 [2024-11-26 18:57:31.223439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.274 #58 NEW cov: 12549 ft: 15768 corp: 34/2417b lim: 100 exec/s: 58 rss: 77Mb L: 81/100 MS: 1 EraseBytes- 00:07:14.274 [2024-11-26 18:57:31.263201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.274 [2024-11-26 18:57:31.263228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.274 [2024-11-26 18:57:31.263291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.274 [2024-11-26 18:57:31.263311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.274 [2024-11-26 18:57:31.263379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:14.274 [2024-11-26 18:57:31.263402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.274 #59 NEW cov: 12549 ft: 15790 corp: 35/2486b lim: 100 exec/s: 59 rss: 77Mb L: 69/100 MS: 1 CrossOver- 00:07:14.274 [2024-11-26 18:57:31.303458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.274 [2024-11-26 18:57:31.303490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.274 [2024-11-26 18:57:31.303550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.274 [2024-11-26 18:57:31.303569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.274 [2024-11-26 18:57:31.303635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 
cid:2 nsid:0 00:07:14.274 [2024-11-26 18:57:31.303656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.274 [2024-11-26 18:57:31.303721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:14.274 [2024-11-26 18:57:31.303740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.274 #60 NEW cov: 12549 ft: 15803 corp: 36/2585b lim: 100 exec/s: 60 rss: 77Mb L: 99/100 MS: 1 ChangeByte- 00:07:14.274 [2024-11-26 18:57:31.343572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.274 [2024-11-26 18:57:31.343599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.274 [2024-11-26 18:57:31.343657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.274 [2024-11-26 18:57:31.343677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.274 [2024-11-26 18:57:31.343743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:14.274 [2024-11-26 18:57:31.343762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.274 [2024-11-26 18:57:31.343826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:14.274 [2024-11-26 18:57:31.343844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.274 #61 NEW cov: 12549 ft: 15812 corp: 37/2684b lim: 100 exec/s: 30 rss: 77Mb L: 99/100 MS: 1 ChangeBinInt- 00:07:14.274 #61 DONE cov: 12549 ft: 15812 corp: 37/2684b lim: 100 exec/s: 30 rss: 77Mb 00:07:14.274 ###### Recommended dictionary. ###### 00:07:14.274 "\376\377\377\365" # Uses: 2 00:07:14.274 ###### End of recommended dictionary. 
###### 00:07:14.274 Done 61 runs in 2 second(s) 00:07:14.274 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:14.274 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:14.274 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:14.274 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:14.274 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:14.274 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:14.274 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:14.274 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:14.274 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:14.274 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:14.274 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:14.274 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:14.533 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:14.533 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:14.533 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:14.533 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:14.533 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:14.533 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:14.533 18:57:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:14.533 [2024-11-26 18:57:31.526228] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
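A note for reading the status lines above and below: each "#N NEW cov: ..." record is standard libFuzzer progress output. cov counts covered coverage points, ft counts features, corp gives corpus size as entries/total bytes, lim is the current input-length cap, exec/s the execution rate, rss resident memory, L the new input's length against the largest in the corpus, and MS the mutation sequence that produced it. A small sketch for pulling the coverage trend out of a saved copy of this console output follows; console.log is an assumed file name, not an artifact named in this build.

# Hedged sketch: chart how coverage grew during a run from a saved log.
grep -o '#[0-9]\+ NEW cov: [0-9]\+' console.log |
    awk '{print $1, $4}'    # -> "#41 12600", one line per new input found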
00:07:14.533 [2024-11-26 18:57:31.526316] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743542 ] 00:07:14.533 [2024-11-26 18:57:31.711825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.792 [2024-11-26 18:57:31.749935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.792 [2024-11-26 18:57:31.809116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.792 [2024-11-26 18:57:31.825271] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:14.792 INFO: Running with entropic power schedule (0xFF, 100). 00:07:14.792 INFO: Seed: 1062520413 00:07:14.792 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:07:14.792 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:07:14.792 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:14.792 INFO: A corpus is not provided, starting from an empty corpus 00:07:14.792 #2 INITED exec/s: 0 rss: 67Mb 00:07:14.792 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:14.792 This may also happen if the target rejected all inputs we tried so far 00:07:14.792 [2024-11-26 18:57:31.870249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123110816944658 len:4627 00:07:14.792 [2024-11-26 18:57:31.870284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.792 [2024-11-26 18:57:31.870321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1302123111085380114 len:4627 00:07:14.792 [2024-11-26 18:57:31.870339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.792 [2024-11-26 18:57:31.870369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1302123111085380114 len:4627 00:07:14.792 [2024-11-26 18:57:31.870385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.792 [2024-11-26 18:57:31.870413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1302123111085380114 len:4627 00:07:14.792 [2024-11-26 18:57:31.870430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.051 NEW_FUNC[1/716]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:15.051 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:15.051 #15 NEW cov: 12300 ft: 12298 corp: 2/50b lim: 50 exec/s: 0 rss: 74Mb L: 49/49 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:07:15.051 [2024-11-26 18:57:32.231034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123110816944658 len:4611 00:07:15.051 [2024-11-26 18:57:32.231079] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.310 #21 NEW cov: 12413 ft: 13239 corp: 3/60b lim: 50 exec/s: 0 rss: 75Mb L: 10/49 MS: 1 CrossOver- 00:07:15.310 [2024-11-26 18:57:32.331372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123110816944658 len:4627 00:07:15.310 [2024-11-26 18:57:32.331406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.310 [2024-11-26 18:57:32.331437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1302123111085380114 len:4627 00:07:15.311 [2024-11-26 18:57:32.331455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.311 [2024-11-26 18:57:32.331494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1302123111085380114 len:4627 00:07:15.311 [2024-11-26 18:57:32.331511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.311 [2024-11-26 18:57:32.331539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1302123111085380114 len:4627 00:07:15.311 [2024-11-26 18:57:32.331556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.311 [2024-11-26 18:57:32.331584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:1302123111085380114 len:4627 00:07:15.311 [2024-11-26 18:57:32.331601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:15.311 #22 NEW cov: 12419 ft: 13601 corp: 4/110b lim: 50 exec/s: 0 rss: 75Mb L: 50/50 MS: 1 CrossOver- 00:07:15.311 [2024-11-26 18:57:32.391305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13744632839234567870 len:48831 00:07:15.311 [2024-11-26 18:57:32.391337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.311 #26 NEW cov: 12504 ft: 14042 corp: 5/122b lim: 50 exec/s: 0 rss: 75Mb L: 12/50 MS: 4 ChangeBit-ChangeBit-CopyPart-InsertRepeatedBytes- 00:07:15.311 [2024-11-26 18:57:32.451592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123110816944658 len:4627 00:07:15.311 [2024-11-26 18:57:32.451623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.311 [2024-11-26 18:57:32.451654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1302123111085380114 len:4627 00:07:15.311 [2024-11-26 18:57:32.451672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.311 [2024-11-26 18:57:32.451702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1302123111085380114 len:4627 00:07:15.311 [2024-11-26 18:57:32.451719] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.311 [2024-11-26 18:57:32.451746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1302123111085380114 len:4627 00:07:15.311 [2024-11-26 18:57:32.451763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.311 [2024-11-26 18:57:32.451790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:1302123111085380114 len:4627 00:07:15.311 [2024-11-26 18:57:32.451807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:15.570 #32 NEW cov: 12504 ft: 14174 corp: 6/172b lim: 50 exec/s: 0 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:07:15.570 [2024-11-26 18:57:32.542285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123113982377662 len:4627 00:07:15.570 [2024-11-26 18:57:32.542326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.570 #33 NEW cov: 12504 ft: 14390 corp: 7/184b lim: 50 exec/s: 0 rss: 75Mb L: 12/50 MS: 1 CrossOver- 00:07:15.570 [2024-11-26 18:57:32.603102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:15.570 [2024-11-26 18:57:32.603180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.570 [2024-11-26 18:57:32.603290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:15.570 [2024-11-26 18:57:32.603334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.570 [2024-11-26 18:57:32.603440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:15.570 [2024-11-26 18:57:32.603496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.570 [2024-11-26 18:57:32.603605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:15.570 [2024-11-26 18:57:32.603649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.570 #36 NEW cov: 12504 ft: 14473 corp: 8/228b lim: 50 exec/s: 0 rss: 75Mb L: 44/50 MS: 3 InsertByte-ChangeBit-InsertRepeatedBytes- 00:07:15.570 [2024-11-26 18:57:32.652466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123282615636498 len:4627 00:07:15.570 [2024-11-26 18:57:32.652500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.570 #37 NEW cov: 12504 ft: 14500 corp: 9/239b lim: 50 exec/s: 0 rss: 75Mb L: 11/50 MS: 1 InsertByte- 00:07:15.570 [2024-11-26 18:57:32.712644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123113982377662 len:4799 00:07:15.570 [2024-11-26 18:57:32.712670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.570 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:15.570 #38 NEW cov: 12521 ft: 14540 corp: 10/251b lim: 50 exec/s: 0 rss: 75Mb L: 12/50 MS: 1 ShuffleBytes- 00:07:15.570 [2024-11-26 18:57:32.773135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123110816944658 len:4627 00:07:15.570 [2024-11-26 18:57:32.773161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.570 [2024-11-26 18:57:32.773215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1302123111087477266 len:4627 00:07:15.570 [2024-11-26 18:57:32.773231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.570 [2024-11-26 18:57:32.773281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1302123111085380114 len:4627 00:07:15.570 [2024-11-26 18:57:32.773296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.570 [2024-11-26 18:57:32.773348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1302123111085380114 len:4627 00:07:15.570 [2024-11-26 18:57:32.773363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.829 #39 NEW cov: 12521 ft: 14708 corp: 11/300b lim: 50 exec/s: 0 rss: 75Mb L: 49/50 MS: 1 ChangeBit- 00:07:15.830 [2024-11-26 18:57:32.812933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4702119470032666178 len:48831 00:07:15.830 [2024-11-26 18:57:32.812960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.830 #40 NEW cov: 12521 ft: 14727 corp: 12/312b lim: 50 exec/s: 0 rss: 75Mb L: 12/50 MS: 1 ChangeBinInt- 00:07:15.830 [2024-11-26 18:57:32.853039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:149201606478533138 len:4627 00:07:15.830 [2024-11-26 18:57:32.853067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.830 #41 NEW cov: 12521 ft: 14787 corp: 13/327b lim: 50 exec/s: 41 rss: 75Mb L: 15/50 MS: 1 CopyPart- 00:07:15.830 [2024-11-26 18:57:32.913208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:262144 len:4799 00:07:15.830 [2024-11-26 18:57:32.913234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.830 #42 NEW cov: 12521 ft: 14831 corp: 14/339b lim: 50 exec/s: 42 rss: 75Mb L: 12/50 MS: 1 CMP- DE: "\000\004\000\000\000\000\000\000"- 00:07:15.830 [2024-11-26 18:57:32.973386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123113982377662 len:2323 00:07:15.830 [2024-11-26 18:57:32.973412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:15.830 #43 NEW cov: 12521 ft: 14898 corp: 15/351b lim: 50 exec/s: 43 rss: 75Mb L: 12/50 MS: 1 ChangeBinInt- 00:07:15.830 [2024-11-26 18:57:33.013502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:149201606478533138 len:4644 00:07:15.830 [2024-11-26 18:57:33.013527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.088 #44 NEW cov: 12521 ft: 14938 corp: 16/366b lim: 50 exec/s: 44 rss: 75Mb L: 15/50 MS: 1 ChangeByte- 00:07:16.088 [2024-11-26 18:57:33.074015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13744632839234567870 len:48831 00:07:16.089 [2024-11-26 18:57:33.074043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.089 [2024-11-26 18:57:33.074083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3188457472 len:1 00:07:16.089 [2024-11-26 18:57:33.074099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.089 [2024-11-26 18:57:33.074149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:16.089 [2024-11-26 18:57:33.074163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.089 [2024-11-26 18:57:33.074215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:16.089 [2024-11-26 18:57:33.074231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.089 #45 NEW cov: 12521 ft: 15006 corp: 17/410b lim: 50 exec/s: 45 rss: 75Mb L: 44/50 MS: 1 CrossOver- 00:07:16.089 [2024-11-26 18:57:33.133797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123852716752574 len:4621 00:07:16.089 [2024-11-26 18:57:33.133823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.089 #51 NEW cov: 12521 ft: 15018 corp: 18/420b lim: 50 exec/s: 51 rss: 75Mb L: 10/50 MS: 1 EraseBytes- 00:07:16.089 [2024-11-26 18:57:33.173914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13696029285486083774 len:48659 00:07:16.089 [2024-11-26 18:57:33.173940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.089 #52 NEW cov: 12521 ft: 15067 corp: 19/431b lim: 50 exec/s: 52 rss: 75Mb L: 11/50 MS: 1 CrossOver- 00:07:16.089 [2024-11-26 18:57:33.234086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123118277344958 len:4627 00:07:16.089 [2024-11-26 18:57:33.234113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.089 #53 NEW cov: 12521 ft: 15109 corp: 20/443b lim: 50 exec/s: 53 rss: 75Mb L: 12/50 MS: 1 ChangeBit- 00:07:16.089 [2024-11-26 18:57:33.274296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 
nsid:0 lba:3187671040 len:1 00:07:16.089 [2024-11-26 18:57:33.274323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.089 [2024-11-26 18:57:33.274363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:16.089 [2024-11-26 18:57:33.274379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.089 #56 NEW cov: 12521 ft: 15359 corp: 21/468b lim: 50 exec/s: 56 rss: 75Mb L: 25/50 MS: 3 EraseBytes-CMP-CrossOver- DE: "\000\000\000\000\000\000\000\004"- 00:07:16.348 [2024-11-26 18:57:33.314750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123110816944658 len:4627 00:07:16.348 [2024-11-26 18:57:33.314777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.348 [2024-11-26 18:57:33.314830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1302123111085380114 len:4627 00:07:16.348 [2024-11-26 18:57:33.314845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.348 [2024-11-26 18:57:33.314899] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1302123248524333586 len:4627 00:07:16.348 [2024-11-26 18:57:33.314914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.348 [2024-11-26 18:57:33.314968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1302123111085380114 len:4627 00:07:16.348 [2024-11-26 18:57:33.314985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.348 [2024-11-26 18:57:33.315038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:1302123111085380114 len:4627 00:07:16.348 [2024-11-26 18:57:33.315053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:16.348 #57 NEW cov: 12521 ft: 15400 corp: 22/518b lim: 50 exec/s: 57 rss: 75Mb L: 50/50 MS: 1 ChangeBit- 00:07:16.348 [2024-11-26 18:57:33.374953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123110816944658 len:4627 00:07:16.348 [2024-11-26 18:57:33.374980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.348 [2024-11-26 18:57:33.375038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1302123111085380114 len:4627 00:07:16.348 [2024-11-26 18:57:33.375061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.348 [2024-11-26 18:57:33.375116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1302123111085380114 len:4627 00:07:16.348 [2024-11-26 18:57:33.375134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.348 [2024-11-26 18:57:33.375191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1302123111085380114 len:4627 00:07:16.348 [2024-11-26 18:57:33.375208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.348 [2024-11-26 18:57:33.375264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:1302123113232863762 len:4627 00:07:16.348 [2024-11-26 18:57:33.375283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:16.348 #58 NEW cov: 12521 ft: 15410 corp: 23/568b lim: 50 exec/s: 58 rss: 75Mb L: 50/50 MS: 1 ChangeBit- 00:07:16.348 [2024-11-26 18:57:33.414574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123042365903378 len:4627 00:07:16.348 [2024-11-26 18:57:33.414599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.348 #64 NEW cov: 12521 ft: 15416 corp: 24/583b lim: 50 exec/s: 64 rss: 75Mb L: 15/50 MS: 1 ShuffleBytes- 00:07:16.348 [2024-11-26 18:57:33.454791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12008468688627962386 len:42663 00:07:16.348 [2024-11-26 18:57:33.454816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.348 [2024-11-26 18:57:33.454850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11962143431564109478 len:4627 00:07:16.348 [2024-11-26 18:57:33.454866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.348 #65 NEW cov: 12521 ft: 15417 corp: 25/609b lim: 50 exec/s: 65 rss: 75Mb L: 26/50 MS: 1 InsertRepeatedBytes- 00:07:16.348 [2024-11-26 18:57:33.514856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13696029288505982507 len:4627 00:07:16.348 [2024-11-26 18:57:33.514884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.348 #66 NEW cov: 12521 ft: 15435 corp: 26/622b lim: 50 exec/s: 66 rss: 75Mb L: 13/50 MS: 1 InsertByte- 00:07:16.348 [2024-11-26 18:57:33.555139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123042365903378 len:4652 00:07:16.348 [2024-11-26 18:57:33.555167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.348 [2024-11-26 18:57:33.555203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1302123852705436178 len:3091 00:07:16.348 [2024-11-26 18:57:33.555218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.608 #67 NEW cov: 12521 ft: 15461 corp: 27/647b lim: 50 exec/s: 67 rss: 75Mb L: 25/50 MS: 1 CrossOver- 00:07:16.608 [2024-11-26 18:57:33.615139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 
lba:1302123174111919806 len:4621 00:07:16.608 [2024-11-26 18:57:33.615166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.608 #68 NEW cov: 12521 ft: 15558 corp: 28/657b lim: 50 exec/s: 68 rss: 75Mb L: 10/50 MS: 1 ChangeByte- 00:07:16.608 [2024-11-26 18:57:33.655253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:07:16.608 [2024-11-26 18:57:33.655280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.608 #71 NEW cov: 12521 ft: 15574 corp: 29/675b lim: 50 exec/s: 71 rss: 75Mb L: 18/50 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes- 00:07:16.608 [2024-11-26 18:57:33.695482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123110816944658 len:4627 00:07:16.608 [2024-11-26 18:57:33.695510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.608 [2024-11-26 18:57:33.695561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1302123111085380114 len:4627 00:07:16.608 [2024-11-26 18:57:33.695577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.608 #72 NEW cov: 12521 ft: 15580 corp: 30/703b lim: 50 exec/s: 72 rss: 75Mb L: 28/50 MS: 1 EraseBytes- 00:07:16.608 [2024-11-26 18:57:33.735615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1302123042365903378 len:4652 00:07:16.608 [2024-11-26 18:57:33.735642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.608 [2024-11-26 18:57:33.735680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1302117255635669522 len:4627 00:07:16.608 [2024-11-26 18:57:33.735696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.608 #73 NEW cov: 12528 ft: 15638 corp: 31/728b lim: 50 exec/s: 73 rss: 76Mb L: 25/50 MS: 1 ShuffleBytes- 00:07:16.608 [2024-11-26 18:57:33.796056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:181027469262848 len:42149 00:07:16.608 [2024-11-26 18:57:33.796082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.608 [2024-11-26 18:57:33.796139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11863788345444574372 len:42149 00:07:16.608 [2024-11-26 18:57:33.796154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.608 [2024-11-26 18:57:33.796205] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11863607321162982564 len:1 00:07:16.608 [2024-11-26 18:57:33.796221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.608 [2024-11-26 18:57:33.796278] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:16.608 [2024-11-26 18:57:33.796294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.867 #74 NEW cov: 12528 ft: 15650 corp: 32/773b lim: 50 exec/s: 74 rss: 76Mb L: 45/50 MS: 1 InsertRepeatedBytes- 00:07:16.867 [2024-11-26 18:57:33.855887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13194139795456 len:1 00:07:16.867 [2024-11-26 18:57:33.855913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.867 #75 NEW cov: 12528 ft: 15668 corp: 33/785b lim: 50 exec/s: 37 rss: 76Mb L: 12/50 MS: 1 CMP- DE: "\014\000\000\000"- 00:07:16.867 #75 DONE cov: 12528 ft: 15668 corp: 33/785b lim: 50 exec/s: 37 rss: 76Mb 00:07:16.867 ###### Recommended dictionary. ###### 00:07:16.867 "\000\004\000\000\000\000\000\000" # Uses: 0 00:07:16.867 "\000\000\000\000\000\000\000\004" # Uses: 0 00:07:16.867 "\014\000\000\000" # Uses: 0 00:07:16.867 ###### End of recommended dictionary. ###### 00:07:16.867 Done 75 runs in 2 second(s) 00:07:16.867 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:16.867 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:16.867 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:16.868 18:57:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:16.868 [2024-11-26 18:57:34.050786] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:07:16.868 [2024-11-26 18:57:34.050852] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743896 ] 00:07:17.127 [2024-11-26 18:57:34.237792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.127 [2024-11-26 18:57:34.276279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.127 [2024-11-26 18:57:34.335372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.387 [2024-11-26 18:57:34.351531] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:17.387 INFO: Running with entropic power schedule (0xFF, 100). 00:07:17.387 INFO: Seed: 3589527956 00:07:17.387 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:07:17.387 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:07:17.387 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:17.387 INFO: A corpus is not provided, starting from an empty corpus 00:07:17.387 #2 INITED exec/s: 0 rss: 67Mb 00:07:17.387 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:17.387 This may also happen if the target rejected all inputs we tried so far 00:07:17.387 [2024-11-26 18:57:34.396516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.387 [2024-11-26 18:57:34.396551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.387 [2024-11-26 18:57:34.396587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.387 [2024-11-26 18:57:34.396605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.387 [2024-11-26 18:57:34.396635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:17.387 [2024-11-26 18:57:34.396652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.646 NEW_FUNC[1/713]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:17.646 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:17.646 #18 NEW cov: 12314 ft: 12313 corp: 2/66b lim: 90 exec/s: 0 rss: 74Mb L: 65/65 MS: 1 InsertRepeatedBytes- 00:07:17.646 [2024-11-26 18:57:34.770087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.646 [2024-11-26 18:57:34.770140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.646 [2024-11-26 18:57:34.770221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.646 [2024-11-26 18:57:34.770251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.646 [2024-11-26 18:57:34.770341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:17.646 [2024-11-26 18:57:34.770363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.646 [2024-11-26 18:57:34.770457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:17.646 [2024-11-26 18:57:34.770484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.646 NEW_FUNC[1/5]: 0x195b6d8 in spdk_nvme_qpair_process_completions /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:764 00:07:17.646 NEW_FUNC[2/5]: 0x19c8c08 in nvme_transport_qpair_process_completions /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_transport.c:659 00:07:17.646 #20 NEW cov: 12471 ft: 13429 corp: 3/144b lim: 90 exec/s: 0 rss: 74Mb L: 78/78 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:17.646 [2024-11-26 18:57:34.830405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.646 [2024-11-26 18:57:34.830434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.646 [2024-11-26 18:57:34.830505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.646 [2024-11-26 18:57:34.830528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.647 [2024-11-26 18:57:34.830605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:17.647 [2024-11-26 18:57:34.830623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.647 [2024-11-26 18:57:34.830713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:17.647 [2024-11-26 18:57:34.830734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.906 #26 NEW cov: 12477 ft: 13703 corp: 4/222b lim: 90 exec/s: 0 rss: 74Mb L: 78/78 MS: 1 CrossOver- 00:07:17.906 [2024-11-26 18:57:34.900687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.906 [2024-11-26 18:57:34.900718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.906 [2024-11-26 18:57:34.900787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.906 [2024-11-26 18:57:34.900805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.906 [2024-11-26 18:57:34.900879] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:17.906 [2024-11-26 18:57:34.900899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.906 [2024-11-26 18:57:34.900995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:17.906 [2024-11-26 18:57:34.901015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.906 #27 NEW cov: 12562 ft: 13895 corp: 5/300b lim: 90 exec/s: 0 rss: 74Mb L: 78/78 MS: 1 CMP- DE: "\020\000\000\000\000\000\000\000"- 00:07:17.906 [2024-11-26 18:57:34.971206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.906 [2024-11-26 18:57:34.971236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.906 [2024-11-26 18:57:34.971313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.906 [2024-11-26 18:57:34.971332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.906 [2024-11-26 18:57:34.971416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:17.906 [2024-11-26 18:57:34.971438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.906 [2024-11-26 18:57:34.971533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:17.906 [2024-11-26 18:57:34.971554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.906 #28 NEW cov: 12562 ft: 13956 corp: 6/378b lim: 90 exec/s: 0 rss: 75Mb L: 78/78 MS: 1 ChangeByte- 00:07:17.906 [2024-11-26 18:57:35.041585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.906 [2024-11-26 18:57:35.041614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.906 [2024-11-26 18:57:35.041691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.906 [2024-11-26 18:57:35.041707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.906 [2024-11-26 18:57:35.041777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:17.906 [2024-11-26 18:57:35.041796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.906 [2024-11-26 18:57:35.041888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:17.906 [2024-11-26 18:57:35.041908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.906 #29 NEW cov: 12562 ft: 14015 corp: 7/456b lim: 90 exec/s: 0 rss: 75Mb L: 78/78 MS: 1 ChangeByte- 00:07:17.906 [2024-11-26 
18:57:35.091770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.906 [2024-11-26 18:57:35.091799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.906 [2024-11-26 18:57:35.091888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.906 [2024-11-26 18:57:35.091905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.906 [2024-11-26 18:57:35.091990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:17.906 [2024-11-26 18:57:35.092009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.907 [2024-11-26 18:57:35.092100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:17.907 [2024-11-26 18:57:35.092118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.166 #30 NEW cov: 12562 ft: 14058 corp: 8/535b lim: 90 exec/s: 0 rss: 75Mb L: 79/79 MS: 1 InsertByte- 00:07:18.166 [2024-11-26 18:57:35.161939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.166 [2024-11-26 18:57:35.161967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.166 [2024-11-26 18:57:35.162056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.166 [2024-11-26 18:57:35.162075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.166 [2024-11-26 18:57:35.162146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.166 [2024-11-26 18:57:35.162162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.166 [2024-11-26 18:57:35.162253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.166 [2024-11-26 18:57:35.162272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.166 #31 NEW cov: 12562 ft: 14137 corp: 9/613b lim: 90 exec/s: 0 rss: 75Mb L: 78/79 MS: 1 ChangeBinInt- 00:07:18.166 [2024-11-26 18:57:35.212157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.166 [2024-11-26 18:57:35.212184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.166 [2024-11-26 18:57:35.212265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.166 [2024-11-26 18:57:35.212283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.166 [2024-11-26 18:57:35.212362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 
00:07:18.166 [2024-11-26 18:57:35.212381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.166 [2024-11-26 18:57:35.212475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.166 [2024-11-26 18:57:35.212493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.166 #32 NEW cov: 12562 ft: 14264 corp: 10/692b lim: 90 exec/s: 0 rss: 75Mb L: 79/79 MS: 1 ShuffleBytes- 00:07:18.166 [2024-11-26 18:57:35.282660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.166 [2024-11-26 18:57:35.282690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.166 [2024-11-26 18:57:35.282763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.166 [2024-11-26 18:57:35.282783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.166 [2024-11-26 18:57:35.282854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.166 [2024-11-26 18:57:35.282872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.166 [2024-11-26 18:57:35.282963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.166 [2024-11-26 18:57:35.282984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.166 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:18.166 #33 NEW cov: 12585 ft: 14344 corp: 11/771b lim: 90 exec/s: 0 rss: 75Mb L: 79/79 MS: 1 InsertByte- 00:07:18.166 [2024-11-26 18:57:35.332583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.166 [2024-11-26 18:57:35.332611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.166 [2024-11-26 18:57:35.332674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.166 [2024-11-26 18:57:35.332692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.166 [2024-11-26 18:57:35.332773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.166 [2024-11-26 18:57:35.332794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.166 #34 NEW cov: 12585 ft: 14394 corp: 12/825b lim: 90 exec/s: 0 rss: 75Mb L: 54/79 MS: 1 CrossOver- 00:07:18.427 [2024-11-26 18:57:35.403242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.427 [2024-11-26 18:57:35.403271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.427 [2024-11-26 
18:57:35.403343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.427 [2024-11-26 18:57:35.403361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.427 [2024-11-26 18:57:35.403443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.427 [2024-11-26 18:57:35.403462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.427 [2024-11-26 18:57:35.403556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.427 [2024-11-26 18:57:35.403578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.427 #35 NEW cov: 12585 ft: 14422 corp: 13/911b lim: 90 exec/s: 35 rss: 75Mb L: 86/86 MS: 1 PersAutoDict- DE: "\020\000\000\000\000\000\000\000"- 00:07:18.427 [2024-11-26 18:57:35.453421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.427 [2024-11-26 18:57:35.453450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.427 [2024-11-26 18:57:35.453526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.427 [2024-11-26 18:57:35.453547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.427 [2024-11-26 18:57:35.453615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.427 [2024-11-26 18:57:35.453635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.427 [2024-11-26 18:57:35.453722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.427 [2024-11-26 18:57:35.453743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.427 #41 NEW cov: 12585 ft: 14461 corp: 14/990b lim: 90 exec/s: 41 rss: 75Mb L: 79/86 MS: 1 PersAutoDict- DE: "\020\000\000\000\000\000\000\000"- 00:07:18.427 [2024-11-26 18:57:35.503659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.427 [2024-11-26 18:57:35.503687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.427 [2024-11-26 18:57:35.503759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.427 [2024-11-26 18:57:35.503777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.427 [2024-11-26 18:57:35.503855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.427 [2024-11-26 18:57:35.503874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.427 [2024-11-26 18:57:35.503962] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.427 [2024-11-26 18:57:35.503982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.427 #42 NEW cov: 12585 ft: 14480 corp: 15/1068b lim: 90 exec/s: 42 rss: 75Mb L: 78/86 MS: 1 PersAutoDict- DE: "\020\000\000\000\000\000\000\000"- 00:07:18.427 [2024-11-26 18:57:35.553857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.427 [2024-11-26 18:57:35.553888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.427 [2024-11-26 18:57:35.553951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.427 [2024-11-26 18:57:35.553969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.427 [2024-11-26 18:57:35.554048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.427 [2024-11-26 18:57:35.554068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.427 [2024-11-26 18:57:35.554152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.427 [2024-11-26 18:57:35.554172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.427 #43 NEW cov: 12585 ft: 14518 corp: 16/1155b lim: 90 exec/s: 43 rss: 75Mb L: 87/87 MS: 1 InsertByte- 00:07:18.427 [2024-11-26 18:57:35.624085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.427 [2024-11-26 18:57:35.624117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.427 [2024-11-26 18:57:35.624194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.427 [2024-11-26 18:57:35.624213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.427 [2024-11-26 18:57:35.624282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.427 [2024-11-26 18:57:35.624299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.427 [2024-11-26 18:57:35.624384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.427 [2024-11-26 18:57:35.624400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.687 #44 NEW cov: 12585 ft: 14551 corp: 17/1233b lim: 90 exec/s: 44 rss: 75Mb L: 78/87 MS: 1 PersAutoDict- DE: "\020\000\000\000\000\000\000\000"- 00:07:18.687 [2024-11-26 18:57:35.694420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.687 [2024-11-26 18:57:35.694447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:07:18.687 [2024-11-26 18:57:35.694541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.687 [2024-11-26 18:57:35.694562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.687 [2024-11-26 18:57:35.694630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.687 [2024-11-26 18:57:35.694648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.687 [2024-11-26 18:57:35.694734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.687 [2024-11-26 18:57:35.694754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.687 #45 NEW cov: 12585 ft: 14570 corp: 18/1312b lim: 90 exec/s: 45 rss: 75Mb L: 79/87 MS: 1 ChangeBit- 00:07:18.687 [2024-11-26 18:57:35.764086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.687 [2024-11-26 18:57:35.764112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.687 [2024-11-26 18:57:35.764171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.687 [2024-11-26 18:57:35.764188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.687 #46 NEW cov: 12585 ft: 14907 corp: 19/1355b lim: 90 exec/s: 46 rss: 75Mb L: 43/87 MS: 1 EraseBytes- 00:07:18.687 [2024-11-26 18:57:35.835287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.687 [2024-11-26 18:57:35.835315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.687 [2024-11-26 18:57:35.835394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.687 [2024-11-26 18:57:35.835411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.687 [2024-11-26 18:57:35.835494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.687 [2024-11-26 18:57:35.835512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.687 [2024-11-26 18:57:35.835607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.687 [2024-11-26 18:57:35.835627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.687 #47 NEW cov: 12585 ft: 14915 corp: 20/1434b lim: 90 exec/s: 47 rss: 75Mb L: 79/87 MS: 1 ShuffleBytes- 00:07:18.687 [2024-11-26 18:57:35.885475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.687 [2024-11-26 18:57:35.885502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:07:18.688 [2024-11-26 18:57:35.885588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.688 [2024-11-26 18:57:35.885604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.688 [2024-11-26 18:57:35.885691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.688 [2024-11-26 18:57:35.885710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.688 [2024-11-26 18:57:35.885793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.688 [2024-11-26 18:57:35.885812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.947 #48 NEW cov: 12585 ft: 14968 corp: 21/1513b lim: 90 exec/s: 48 rss: 75Mb L: 79/87 MS: 1 CrossOver- 00:07:18.947 [2024-11-26 18:57:35.955889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.947 [2024-11-26 18:57:35.955918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.947 [2024-11-26 18:57:35.956001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.947 [2024-11-26 18:57:35.956018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.947 [2024-11-26 18:57:35.956093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.947 [2024-11-26 18:57:35.956111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.947 [2024-11-26 18:57:35.956195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.947 [2024-11-26 18:57:35.956216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.947 #49 NEW cov: 12585 ft: 14979 corp: 22/1592b lim: 90 exec/s: 49 rss: 75Mb L: 79/87 MS: 1 CMP- DE: "\366\377\377\377"- 00:07:18.947 [2024-11-26 18:57:36.005985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.947 [2024-11-26 18:57:36.006014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.947 [2024-11-26 18:57:36.006090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.947 [2024-11-26 18:57:36.006107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.947 [2024-11-26 18:57:36.006183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.947 [2024-11-26 18:57:36.006203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.947 [2024-11-26 18:57:36.006290] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.947 [2024-11-26 18:57:36.006311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.947 #50 NEW cov: 12585 ft: 15001 corp: 23/1670b lim: 90 exec/s: 50 rss: 75Mb L: 78/87 MS: 1 ChangeBinInt- 00:07:18.947 [2024-11-26 18:57:36.056486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.947 [2024-11-26 18:57:36.056513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.947 [2024-11-26 18:57:36.056595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.947 [2024-11-26 18:57:36.056613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.947 [2024-11-26 18:57:36.056693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.947 [2024-11-26 18:57:36.056714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.947 [2024-11-26 18:57:36.056806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.947 [2024-11-26 18:57:36.056827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.947 #51 NEW cov: 12585 ft: 15006 corp: 24/1753b lim: 90 exec/s: 51 rss: 75Mb L: 83/87 MS: 1 CrossOver- 00:07:18.947 [2024-11-26 18:57:36.106757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.947 [2024-11-26 18:57:36.106786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.947 [2024-11-26 18:57:36.106864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.947 [2024-11-26 18:57:36.106880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.948 [2024-11-26 18:57:36.106948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.948 [2024-11-26 18:57:36.106966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.948 [2024-11-26 18:57:36.107061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.948 [2024-11-26 18:57:36.107079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.948 #52 NEW cov: 12585 ft: 15032 corp: 25/1842b lim: 90 exec/s: 52 rss: 75Mb L: 89/89 MS: 1 CopyPart- 00:07:19.207 [2024-11-26 18:57:36.177110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.207 [2024-11-26 18:57:36.177139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.207 [2024-11-26 18:57:36.177226] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.207 [2024-11-26 18:57:36.177243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.207 [2024-11-26 18:57:36.177322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:19.207 [2024-11-26 18:57:36.177340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.207 [2024-11-26 18:57:36.177428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:19.207 [2024-11-26 18:57:36.177447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.207 #53 NEW cov: 12585 ft: 15044 corp: 26/1921b lim: 90 exec/s: 53 rss: 75Mb L: 79/89 MS: 1 PersAutoDict- DE: "\366\377\377\377"- 00:07:19.207 [2024-11-26 18:57:36.227060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.207 [2024-11-26 18:57:36.227091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.207 [2024-11-26 18:57:36.227147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.207 [2024-11-26 18:57:36.227163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.207 [2024-11-26 18:57:36.227232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:19.207 [2024-11-26 18:57:36.227252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.207 #59 NEW cov: 12585 ft: 15088 corp: 27/1979b lim: 90 exec/s: 59 rss: 76Mb L: 58/89 MS: 1 CopyPart- 00:07:19.207 [2024-11-26 18:57:36.297884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.207 [2024-11-26 18:57:36.297913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.207 [2024-11-26 18:57:36.297986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.207 [2024-11-26 18:57:36.298006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.207 [2024-11-26 18:57:36.298075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:19.207 [2024-11-26 18:57:36.298095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.207 [2024-11-26 18:57:36.298184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:19.207 [2024-11-26 18:57:36.298203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.207 #60 NEW cov: 12585 ft: 15099 corp: 28/2057b lim: 90 exec/s: 60 rss: 76Mb L: 78/89 MS: 1 ChangeBit- 00:07:19.207 [2024-11-26 
18:57:36.348154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.207 [2024-11-26 18:57:36.348182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.207 [2024-11-26 18:57:36.348267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.207 [2024-11-26 18:57:36.348287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.207 [2024-11-26 18:57:36.348359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:19.207 [2024-11-26 18:57:36.348377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.207 [2024-11-26 18:57:36.348467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:19.207 [2024-11-26 18:57:36.348489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.207 #61 NEW cov: 12585 ft: 15111 corp: 29/2136b lim: 90 exec/s: 61 rss: 76Mb L: 79/89 MS: 1 CrossOver- 00:07:19.207 [2024-11-26 18:57:36.398512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.207 [2024-11-26 18:57:36.398540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.207 [2024-11-26 18:57:36.398628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.207 [2024-11-26 18:57:36.398644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.207 [2024-11-26 18:57:36.398727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:19.207 [2024-11-26 18:57:36.398746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.207 [2024-11-26 18:57:36.398836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:19.207 [2024-11-26 18:57:36.398853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.467 #62 NEW cov: 12585 ft: 15211 corp: 30/2215b lim: 90 exec/s: 31 rss: 76Mb L: 79/89 MS: 1 ShuffleBytes- 00:07:19.467 #62 DONE cov: 12585 ft: 15211 corp: 30/2215b lim: 90 exec/s: 31 rss: 76Mb 00:07:19.467 ###### Recommended dictionary. ###### 00:07:19.467 "\020\000\000\000\000\000\000\000" # Uses: 6 00:07:19.467 "\366\377\377\377" # Uses: 1 00:07:19.467 ###### End of recommended dictionary. 
###### 00:07:19.467 Done 62 runs in 2 second(s) 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:19.467 18:57:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:19.467 [2024-11-26 18:57:36.593192] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
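(For reference, the start_llvm_fuzz trace above reduces to roughly the sketch below for run 21. SPDK_DIR stands in for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk; the redirections into the conf, suppression, and dictionary files are assumptions, since the set -x trace only prints each command's expansion. -Z selects the fuzzer type; 21 corresponds to the reservation-release target, per the NEW_FUNC lines that follow.)

    run=21
    port="44$(printf %02d "$run")"        # printf %02d 21 -> "21", so port=4421
    conf="/tmp/fuzz_json_${run}.conf"
    supp="/var/tmp/suppress_nvmf_fuzz"
    corpus="${SPDK_DIR}/../corpus/llvm_nvmf_${run}"

    mkdir -p "$corpus"
    # Rewrite the listener port in the shared JSON config (redirect assumed):
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
        "${SPDK_DIR}/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$conf"
    # Known-benign leaks suppressed for LeakSanitizer (redirect assumed):
    printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "$supp"

    LSAN_OPTIONS="report_objects=1:suppressions=${supp}:print_suppressions=0" \
    "${SPDK_DIR}/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "${SPDK_DIR}/../output/llvm/" \
        -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}" \
        -c "$conf" -t 1 -D "$corpus" -Z "$run"

    # The "Recommended dictionary" entries printed above could be saved and
    # replayed via libFuzzer's -dict= flag (octal escapes such as "\000\015"
    # rewritten to libFuzzer's \xNN form, e.g. "\x00\x0D"); whether the
    # llvm_nvme_fuzz wrapper forwards extra flags to libFuzzer is not shown
    # in this log, so treat that step as untested.

    rm -rf "$conf" "$supp"                # per-run cleanup, as in run.sh@54
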
00:07:19.467 [2024-11-26 18:57:36.593256] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744255 ] 00:07:19.727 [2024-11-26 18:57:36.779305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.727 [2024-11-26 18:57:36.817403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.727 [2024-11-26 18:57:36.876347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.727 [2024-11-26 18:57:36.892514] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:19.727 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.727 INFO: Seed: 1835550136 00:07:19.727 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:07:19.727 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:07:19.727 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:19.727 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.727 #2 INITED exec/s: 0 rss: 67Mb 00:07:19.727 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:19.727 This may also happen if the target rejected all inputs we tried so far 00:07:19.986 [2024-11-26 18:57:36.947441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:19.986 [2024-11-26 18:57:36.947484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.986 [2024-11-26 18:57:36.947521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:19.986 [2024-11-26 18:57:36.947539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.245 NEW_FUNC[1/718]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:20.245 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:20.245 #6 NEW cov: 12333 ft: 12332 corp: 2/24b lim: 50 exec/s: 0 rss: 75Mb L: 23/23 MS: 4 CopyPart-CopyPart-CopyPart-InsertRepeatedBytes- 00:07:20.245 [2024-11-26 18:57:37.299283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.245 [2024-11-26 18:57:37.299325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.245 [2024-11-26 18:57:37.299361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.245 [2024-11-26 18:57:37.299380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.245 #8 NEW cov: 12446 ft: 12831 corp: 3/48b lim: 50 exec/s: 0 rss: 75Mb L: 24/24 MS: 2 ChangeBit-CrossOver- 00:07:20.245 [2024-11-26 18:57:37.359286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.245 [2024-11-26 18:57:37.359319] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.245 [2024-11-26 18:57:37.359354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.245 [2024-11-26 18:57:37.359372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.245 #9 NEW cov: 12452 ft: 13121 corp: 4/72b lim: 50 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 ChangeBinInt- 00:07:20.245 [2024-11-26 18:57:37.449536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.245 [2024-11-26 18:57:37.449568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.245 [2024-11-26 18:57:37.449603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.245 [2024-11-26 18:57:37.449621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.506 #10 NEW cov: 12537 ft: 13389 corp: 5/96b lim: 50 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 ChangeBit- 00:07:20.506 [2024-11-26 18:57:37.540408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.506 [2024-11-26 18:57:37.540439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.506 [2024-11-26 18:57:37.540516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.506 [2024-11-26 18:57:37.540544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.506 #16 NEW cov: 12537 ft: 13565 corp: 6/120b lim: 50 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 CrossOver- 00:07:20.506 [2024-11-26 18:57:37.600597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.506 [2024-11-26 18:57:37.600626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.506 [2024-11-26 18:57:37.600697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.506 [2024-11-26 18:57:37.600718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.506 #17 NEW cov: 12537 ft: 13758 corp: 7/144b lim: 50 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 ChangeBinInt- 00:07:20.506 [2024-11-26 18:57:37.640687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.506 [2024-11-26 18:57:37.640716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.506 [2024-11-26 18:57:37.640786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.506 [2024-11-26 18:57:37.640810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.506 #18 NEW cov: 12537 ft: 13954 corp: 8/168b lim: 50 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 
CrossOver- 00:07:20.506 [2024-11-26 18:57:37.680798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.506 [2024-11-26 18:57:37.680827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.506 [2024-11-26 18:57:37.680896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.506 [2024-11-26 18:57:37.680917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.506 #19 NEW cov: 12537 ft: 13991 corp: 9/192b lim: 50 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 ChangeBit- 00:07:20.765 [2024-11-26 18:57:37.720904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.765 [2024-11-26 18:57:37.720933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.765 [2024-11-26 18:57:37.721005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.765 [2024-11-26 18:57:37.721027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.765 #20 NEW cov: 12537 ft: 14026 corp: 10/216b lim: 50 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 ChangeBit- 00:07:20.765 [2024-11-26 18:57:37.761016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.765 [2024-11-26 18:57:37.761044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.765 [2024-11-26 18:57:37.761111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.765 [2024-11-26 18:57:37.761133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.765 #21 NEW cov: 12537 ft: 14057 corp: 11/240b lim: 50 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 ChangeASCIIInt- 00:07:20.765 [2024-11-26 18:57:37.821207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.765 [2024-11-26 18:57:37.821236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.765 [2024-11-26 18:57:37.821308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.765 [2024-11-26 18:57:37.821336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.765 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:20.765 #22 NEW cov: 12554 ft: 14207 corp: 12/264b lim: 50 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 ShuffleBytes- 00:07:20.765 [2024-11-26 18:57:37.881196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.765 [2024-11-26 18:57:37.881225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.765 #23 NEW cov: 12554 ft: 14964 corp: 13/276b lim: 50 exec/s: 0 rss: 
75Mb L: 12/24 MS: 1 CrossOver- 00:07:20.765 [2024-11-26 18:57:37.921456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.765 [2024-11-26 18:57:37.921491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.765 [2024-11-26 18:57:37.921561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.765 [2024-11-26 18:57:37.921583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.765 #24 NEW cov: 12554 ft: 15014 corp: 14/300b lim: 50 exec/s: 24 rss: 75Mb L: 24/24 MS: 1 ShuffleBytes- 00:07:21.024 [2024-11-26 18:57:37.981674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.024 [2024-11-26 18:57:37.981704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.024 [2024-11-26 18:57:37.981774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.024 [2024-11-26 18:57:37.981798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.024 #25 NEW cov: 12554 ft: 15027 corp: 15/324b lim: 50 exec/s: 25 rss: 75Mb L: 24/24 MS: 1 CMP- DE: "\000\015"- 00:07:21.024 [2024-11-26 18:57:38.021717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.024 [2024-11-26 18:57:38.021744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.024 [2024-11-26 18:57:38.021813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.024 [2024-11-26 18:57:38.021834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.024 #26 NEW cov: 12554 ft: 15099 corp: 16/348b lim: 50 exec/s: 26 rss: 75Mb L: 24/24 MS: 1 CopyPart- 00:07:21.024 [2024-11-26 18:57:38.082082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.024 [2024-11-26 18:57:38.082110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.024 [2024-11-26 18:57:38.082174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.024 [2024-11-26 18:57:38.082195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.024 [2024-11-26 18:57:38.082264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.024 [2024-11-26 18:57:38.082287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.024 #27 NEW cov: 12554 ft: 15426 corp: 17/386b lim: 50 exec/s: 27 rss: 75Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:21.024 [2024-11-26 18:57:38.141908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.024 [2024-11-26 
18:57:38.141937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.024 #28 NEW cov: 12554 ft: 15440 corp: 18/398b lim: 50 exec/s: 28 rss: 75Mb L: 12/38 MS: 1 PersAutoDict- DE: "\000\015"- 00:07:21.024 [2024-11-26 18:57:38.202076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.024 [2024-11-26 18:57:38.202104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.024 #29 NEW cov: 12554 ft: 15454 corp: 19/410b lim: 50 exec/s: 29 rss: 75Mb L: 12/38 MS: 1 ChangeByte- 00:07:21.284 [2024-11-26 18:57:38.242331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.284 [2024-11-26 18:57:38.242360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.284 [2024-11-26 18:57:38.242433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.284 [2024-11-26 18:57:38.242456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.284 #30 NEW cov: 12554 ft: 15470 corp: 20/436b lim: 50 exec/s: 30 rss: 75Mb L: 26/38 MS: 1 PersAutoDict- DE: "\000\015"- 00:07:21.284 [2024-11-26 18:57:38.302521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.284 [2024-11-26 18:57:38.302550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.284 [2024-11-26 18:57:38.302621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.284 [2024-11-26 18:57:38.302643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.284 #31 NEW cov: 12554 ft: 15565 corp: 21/462b lim: 50 exec/s: 31 rss: 75Mb L: 26/38 MS: 1 PersAutoDict- DE: "\000\015"- 00:07:21.284 [2024-11-26 18:57:38.342475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.284 [2024-11-26 18:57:38.342504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.284 #32 NEW cov: 12554 ft: 15612 corp: 22/473b lim: 50 exec/s: 32 rss: 75Mb L: 11/38 MS: 1 EraseBytes- 00:07:21.284 [2024-11-26 18:57:38.402809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.284 [2024-11-26 18:57:38.402839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.284 [2024-11-26 18:57:38.402913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.284 [2024-11-26 18:57:38.402934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.284 #33 NEW cov: 12554 ft: 15626 corp: 23/502b lim: 50 exec/s: 33 rss: 75Mb L: 29/38 MS: 1 EraseBytes- 00:07:21.284 [2024-11-26 18:57:38.462839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.284 [2024-11-26 18:57:38.462868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.544 #34 NEW cov: 12554 ft: 15666 corp: 24/516b lim: 50 exec/s: 34 rss: 76Mb L: 14/38 MS: 1 CopyPart- 00:07:21.544 [2024-11-26 18:57:38.523168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.544 [2024-11-26 18:57:38.523196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.544 [2024-11-26 18:57:38.523262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.544 [2024-11-26 18:57:38.523284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.544 #36 NEW cov: 12554 ft: 15704 corp: 25/541b lim: 50 exec/s: 36 rss: 76Mb L: 25/38 MS: 2 ChangeByte-CrossOver- 00:07:21.544 [2024-11-26 18:57:38.563207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.544 [2024-11-26 18:57:38.563235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.544 [2024-11-26 18:57:38.563302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.544 [2024-11-26 18:57:38.563325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.544 #37 NEW cov: 12554 ft: 15718 corp: 26/564b lim: 50 exec/s: 37 rss: 76Mb L: 23/38 MS: 1 EraseBytes- 00:07:21.544 [2024-11-26 18:57:38.603341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.544 [2024-11-26 18:57:38.603368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.544 [2024-11-26 18:57:38.603441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.544 [2024-11-26 18:57:38.603464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.544 #38 NEW cov: 12554 ft: 15752 corp: 27/588b lim: 50 exec/s: 38 rss: 76Mb L: 24/38 MS: 1 ChangeBinInt- 00:07:21.544 [2024-11-26 18:57:38.643301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.544 [2024-11-26 18:57:38.643329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.544 #39 NEW cov: 12554 ft: 15759 corp: 28/600b lim: 50 exec/s: 39 rss: 76Mb L: 12/38 MS: 1 ChangeByte- 00:07:21.544 [2024-11-26 18:57:38.683712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.544 [2024-11-26 18:57:38.683740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.544 [2024-11-26 18:57:38.683804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.544 [2024-11-26 18:57:38.683827] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.544 [2024-11-26 18:57:38.683896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.544 [2024-11-26 18:57:38.683919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.544 #40 NEW cov: 12554 ft: 15772 corp: 29/637b lim: 50 exec/s: 40 rss: 76Mb L: 37/38 MS: 1 CopyPart- 00:07:21.544 [2024-11-26 18:57:38.723678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.544 [2024-11-26 18:57:38.723706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.544 [2024-11-26 18:57:38.723774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.544 [2024-11-26 18:57:38.723796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.544 #41 NEW cov: 12554 ft: 15807 corp: 30/661b lim: 50 exec/s: 41 rss: 76Mb L: 24/38 MS: 1 ShuffleBytes- 00:07:21.805 [2024-11-26 18:57:38.763765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.805 [2024-11-26 18:57:38.763796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.805 [2024-11-26 18:57:38.763863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.805 [2024-11-26 18:57:38.763884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.805 #42 NEW cov: 12554 ft: 15828 corp: 31/685b lim: 50 exec/s: 42 rss: 76Mb L: 24/38 MS: 1 ChangeBit- 00:07:21.805 [2024-11-26 18:57:38.803893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.805 [2024-11-26 18:57:38.803923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.805 [2024-11-26 18:57:38.804007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.805 [2024-11-26 18:57:38.804031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.805 #43 NEW cov: 12561 ft: 15871 corp: 32/709b lim: 50 exec/s: 43 rss: 76Mb L: 24/38 MS: 1 ShuffleBytes- 00:07:21.805 [2024-11-26 18:57:38.844063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.805 [2024-11-26 18:57:38.844091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.805 [2024-11-26 18:57:38.844164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.805 [2024-11-26 18:57:38.844186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.805 #49 NEW cov: 12561 ft: 15882 corp: 33/738b lim: 50 exec/s: 49 rss: 
76Mb L: 29/38 MS: 1 ChangeBinInt- 00:07:21.805 [2024-11-26 18:57:38.904220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.805 [2024-11-26 18:57:38.904247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.805 [2024-11-26 18:57:38.904317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.805 [2024-11-26 18:57:38.904339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.805 #50 NEW cov: 12561 ft: 15921 corp: 34/762b lim: 50 exec/s: 25 rss: 76Mb L: 24/38 MS: 1 ShuffleBytes- 00:07:21.805 #50 DONE cov: 12561 ft: 15921 corp: 34/762b lim: 50 exec/s: 25 rss: 76Mb 00:07:21.805 ###### Recommended dictionary. ###### 00:07:21.805 "\000\015" # Uses: 4 00:07:21.805 ###### End of recommended dictionary. ###### 00:07:21.805 Done 50 runs in 2 second(s) 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:22.064 18:57:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:22.064 [2024-11-26 18:57:39.101572] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:07:22.064 [2024-11-26 18:57:39.101640] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744596 ] 00:07:22.324 [2024-11-26 18:57:39.293487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.324 [2024-11-26 18:57:39.331838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.324 [2024-11-26 18:57:39.390837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.324 [2024-11-26 18:57:39.406998] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:22.324 INFO: Running with entropic power schedule (0xFF, 100). 00:07:22.324 INFO: Seed: 54579733 00:07:22.324 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:07:22.324 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:07:22.324 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:22.324 INFO: A corpus is not provided, starting from an empty corpus 00:07:22.324 #2 INITED exec/s: 0 rss: 67Mb 00:07:22.324 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:22.324 This may also happen if the target rejected all inputs we tried so far 00:07:22.324 [2024-11-26 18:57:39.462611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.324 [2024-11-26 18:57:39.462643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.324 [2024-11-26 18:57:39.462705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.324 [2024-11-26 18:57:39.462725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.324 [2024-11-26 18:57:39.462788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.324 [2024-11-26 18:57:39.462810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.583 NEW_FUNC[1/718]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:22.583 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:22.583 #7 NEW cov: 12359 ft: 12358 corp: 2/56b lim: 85 exec/s: 0 rss: 74Mb L: 55/55 MS: 5 ChangeBit-InsertByte-CopyPart-EraseBytes-InsertRepeatedBytes- 00:07:22.583 [2024-11-26 18:57:39.783456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.583 [2024-11-26 18:57:39.783512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.583 [2024-11-26 18:57:39.783589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.583 [2024-11-26 18:57:39.783618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.583 [2024-11-26 18:57:39.783700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.583 [2024-11-26 18:57:39.783728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.842 #8 NEW cov: 12472 ft: 13049 corp: 3/111b lim: 85 exec/s: 0 rss: 75Mb L: 55/55 MS: 1 CrossOver- 00:07:22.842 [2024-11-26 18:57:39.843485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.842 [2024-11-26 18:57:39.843514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.842 [2024-11-26 18:57:39.843568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.842 [2024-11-26 18:57:39.843589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.842 [2024-11-26 18:57:39.843653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.842 [2024-11-26 18:57:39.843674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.842 #9 NEW cov: 12478 ft: 13182 corp: 4/175b lim: 85 exec/s: 0 rss: 75Mb L: 64/64 MS: 1 InsertRepeatedBytes- 00:07:22.842 [2024-11-26 18:57:39.903338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.842 [2024-11-26 18:57:39.903368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.842 #14 NEW cov: 12563 ft: 14294 corp: 5/207b lim: 85 exec/s: 0 rss: 75Mb L: 32/64 MS: 5 InsertByte-ChangeBinInt-ChangeBit-CrossOver-CrossOver- 00:07:22.842 [2024-11-26 18:57:39.943585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.842 [2024-11-26 18:57:39.943613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.842 [2024-11-26 18:57:39.943680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.842 [2024-11-26 18:57:39.943701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.842 #15 NEW cov: 12563 ft: 14640 corp: 6/252b lim: 85 exec/s: 0 rss: 75Mb L: 45/64 MS: 1 InsertRepeatedBytes- 00:07:22.842 [2024-11-26 18:57:39.983833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.842 [2024-11-26 18:57:39.983861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.842 [2024-11-26 18:57:39.983922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.842 [2024-11-26 18:57:39.983943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.842 [2024-11-26 18:57:39.984009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.842 [2024-11-26 18:57:39.984031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.842 #16 NEW cov: 12563 ft: 14705 corp: 7/316b lim: 85 exec/s: 0 rss: 75Mb L: 64/64 MS: 1 ChangeByte- 00:07:23.102 [2024-11-26 18:57:40.054242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.102 [2024-11-26 18:57:40.054280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.102 [2024-11-26 18:57:40.054346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.102 [2024-11-26 18:57:40.054367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.102 [2024-11-26 18:57:40.054443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.102 [2024-11-26 18:57:40.054464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.102 [2024-11-26 18:57:40.054538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.102 [2024-11-26 18:57:40.054561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.102 #17 NEW cov: 12563 ft: 15168 corp: 8/399b lim: 85 exec/s: 0 rss: 75Mb L: 83/83 MS: 1 InsertRepeatedBytes- 00:07:23.102 [2024-11-26 18:57:40.094290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.102 [2024-11-26 18:57:40.094327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.102 [2024-11-26 18:57:40.094392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.102 [2024-11-26 18:57:40.094413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.102 [2024-11-26 18:57:40.094483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.102 [2024-11-26 18:57:40.094505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.102 #18 NEW cov: 12563 ft: 15216 corp: 9/462b lim: 85 exec/s: 0 rss: 75Mb L: 63/83 MS: 1 InsertRepeatedBytes- 00:07:23.102 [2024-11-26 18:57:40.134431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.102 [2024-11-26 18:57:40.134462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.102 [2024-11-26 18:57:40.134523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.102 [2024-11-26 18:57:40.134544] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.102 [2024-11-26 18:57:40.134604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.102 [2024-11-26 18:57:40.134626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.102 [2024-11-26 18:57:40.134690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.102 [2024-11-26 18:57:40.134709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.102 #19 NEW cov: 12563 ft: 15274 corp: 10/545b lim: 85 exec/s: 0 rss: 75Mb L: 83/83 MS: 1 CopyPart- 00:07:23.102 [2024-11-26 18:57:40.194289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.102 [2024-11-26 18:57:40.194319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.102 [2024-11-26 18:57:40.194387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.102 [2024-11-26 18:57:40.194409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.102 #20 NEW cov: 12563 ft: 15349 corp: 11/590b lim: 85 exec/s: 0 rss: 75Mb L: 45/83 MS: 1 ShuffleBytes- 00:07:23.102 [2024-11-26 18:57:40.254749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.102 [2024-11-26 18:57:40.254794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.102 [2024-11-26 18:57:40.254848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.102 [2024-11-26 18:57:40.254874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.102 [2024-11-26 18:57:40.254939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.102 [2024-11-26 18:57:40.254962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.102 [2024-11-26 18:57:40.255027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.102 [2024-11-26 18:57:40.255045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.102 #21 NEW cov: 12563 ft: 15372 corp: 12/673b lim: 85 exec/s: 0 rss: 75Mb L: 83/83 MS: 1 ChangeBinInt- 00:07:23.102 [2024-11-26 18:57:40.294833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.102 [2024-11-26 18:57:40.294863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.102 [2024-11-26 18:57:40.294922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.102 [2024-11-26 18:57:40.294943] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.102 [2024-11-26 18:57:40.295005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.102 [2024-11-26 18:57:40.295025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.102 [2024-11-26 18:57:40.295090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.103 [2024-11-26 18:57:40.295109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.363 #22 NEW cov: 12563 ft: 15449 corp: 13/756b lim: 85 exec/s: 0 rss: 75Mb L: 83/83 MS: 1 CrossOver- 00:07:23.363 [2024-11-26 18:57:40.335023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.363 [2024-11-26 18:57:40.335055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.363 [2024-11-26 18:57:40.335112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.363 [2024-11-26 18:57:40.335135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.363 [2024-11-26 18:57:40.335205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.363 [2024-11-26 18:57:40.335229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.363 [2024-11-26 18:57:40.335299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.363 [2024-11-26 18:57:40.335319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.363 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:23.363 #23 NEW cov: 12586 ft: 15518 corp: 14/840b lim: 85 exec/s: 0 rss: 75Mb L: 84/84 MS: 1 InsertByte- 00:07:23.363 [2024-11-26 18:57:40.394979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.363 [2024-11-26 18:57:40.395007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.363 [2024-11-26 18:57:40.395062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.363 [2024-11-26 18:57:40.395084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.363 [2024-11-26 18:57:40.395154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.363 [2024-11-26 18:57:40.395175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.363 #24 NEW cov: 12586 ft: 15539 corp: 15/897b lim: 85 exec/s: 0 rss: 75Mb L: 57/84 MS: 1 CMP- DE: "\376\377"- 00:07:23.363 [2024-11-26 18:57:40.435287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.363 [2024-11-26 18:57:40.435318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.363 [2024-11-26 18:57:40.435378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.363 [2024-11-26 18:57:40.435401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.363 [2024-11-26 18:57:40.435478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.363 [2024-11-26 18:57:40.435500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.363 [2024-11-26 18:57:40.435569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.363 [2024-11-26 18:57:40.435590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.363 #25 NEW cov: 12586 ft: 15546 corp: 16/980b lim: 85 exec/s: 25 rss: 75Mb L: 83/84 MS: 1 ShuffleBytes- 00:07:23.363 [2024-11-26 18:57:40.475219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.363 [2024-11-26 18:57:40.475248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.363 [2024-11-26 18:57:40.475308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.363 [2024-11-26 18:57:40.475329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.363 [2024-11-26 18:57:40.475394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.363 [2024-11-26 18:57:40.475417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.363 #26 NEW cov: 12586 ft: 15570 corp: 17/1037b lim: 85 exec/s: 26 rss: 75Mb L: 57/84 MS: 1 ChangeByte- 00:07:23.363 [2024-11-26 18:57:40.535518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.363 [2024-11-26 18:57:40.535548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.363 [2024-11-26 18:57:40.535604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.363 [2024-11-26 18:57:40.535625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.363 [2024-11-26 18:57:40.535691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.363 [2024-11-26 18:57:40.535714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.363 [2024-11-26 18:57:40.535780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.363 [2024-11-26 18:57:40.535799] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.623 #27 NEW cov: 12586 ft: 15574 corp: 18/1120b lim: 85 exec/s: 27 rss: 75Mb L: 83/84 MS: 1 CopyPart- 00:07:23.623 [2024-11-26 18:57:40.595715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.623 [2024-11-26 18:57:40.595747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.623 [2024-11-26 18:57:40.595807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.623 [2024-11-26 18:57:40.595830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.623 [2024-11-26 18:57:40.595895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.623 [2024-11-26 18:57:40.595915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.623 [2024-11-26 18:57:40.595980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.623 [2024-11-26 18:57:40.595999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.623 #29 NEW cov: 12586 ft: 15586 corp: 19/1195b lim: 85 exec/s: 29 rss: 75Mb L: 75/84 MS: 2 PersAutoDict-InsertRepeatedBytes- DE: "\376\377"- 00:07:23.623 [2024-11-26 18:57:40.635826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.623 [2024-11-26 18:57:40.635854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.623 [2024-11-26 18:57:40.635910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.623 [2024-11-26 18:57:40.635931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.623 [2024-11-26 18:57:40.635995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.623 [2024-11-26 18:57:40.636018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.623 [2024-11-26 18:57:40.636083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.623 [2024-11-26 18:57:40.636103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.623 #30 NEW cov: 12586 ft: 15667 corp: 20/1266b lim: 85 exec/s: 30 rss: 75Mb L: 71/84 MS: 1 EraseBytes- 00:07:23.623 [2024-11-26 18:57:40.695837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.623 [2024-11-26 18:57:40.695866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.623 [2024-11-26 18:57:40.695923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.623 
[2024-11-26 18:57:40.695945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.623 [2024-11-26 18:57:40.696009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.623 [2024-11-26 18:57:40.696032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.623 #36 NEW cov: 12586 ft: 15686 corp: 21/1330b lim: 85 exec/s: 36 rss: 75Mb L: 64/84 MS: 1 ChangeBit- 00:07:23.623 [2024-11-26 18:57:40.735928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.623 [2024-11-26 18:57:40.735956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.623 [2024-11-26 18:57:40.736014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.623 [2024-11-26 18:57:40.736036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.623 [2024-11-26 18:57:40.736108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.623 [2024-11-26 18:57:40.736129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.624 #37 NEW cov: 12586 ft: 15718 corp: 22/1388b lim: 85 exec/s: 37 rss: 76Mb L: 58/84 MS: 1 InsertByte- 00:07:23.624 [2024-11-26 18:57:40.796270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.624 [2024-11-26 18:57:40.796298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.624 [2024-11-26 18:57:40.796357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.624 [2024-11-26 18:57:40.796379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.624 [2024-11-26 18:57:40.796446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.624 [2024-11-26 18:57:40.796469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.624 [2024-11-26 18:57:40.796544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.624 [2024-11-26 18:57:40.796567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.624 #38 NEW cov: 12586 ft: 15738 corp: 23/1462b lim: 85 exec/s: 38 rss: 76Mb L: 74/84 MS: 1 InsertRepeatedBytes- 00:07:23.883 [2024-11-26 18:57:40.836083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.883 [2024-11-26 18:57:40.836111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.883 [2024-11-26 18:57:40.836176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 
00:07:23.883 [2024-11-26 18:57:40.836197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.883 #39 NEW cov: 12586 ft: 15745 corp: 24/1509b lim: 85 exec/s: 39 rss: 76Mb L: 47/84 MS: 1 EraseBytes- 00:07:23.883 [2024-11-26 18:57:40.876037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.883 [2024-11-26 18:57:40.876065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.883 #40 NEW cov: 12586 ft: 15807 corp: 25/1538b lim: 85 exec/s: 40 rss: 76Mb L: 29/84 MS: 1 EraseBytes- 00:07:23.883 [2024-11-26 18:57:40.916340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.883 [2024-11-26 18:57:40.916368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.883 [2024-11-26 18:57:40.916431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.883 [2024-11-26 18:57:40.916452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.883 #41 NEW cov: 12586 ft: 15825 corp: 26/1584b lim: 85 exec/s: 41 rss: 76Mb L: 46/84 MS: 1 InsertByte- 00:07:23.883 [2024-11-26 18:57:40.976790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.883 [2024-11-26 18:57:40.976819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.883 [2024-11-26 18:57:40.976872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.883 [2024-11-26 18:57:40.976894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.883 [2024-11-26 18:57:40.976961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.883 [2024-11-26 18:57:40.976984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.883 [2024-11-26 18:57:40.977049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.883 [2024-11-26 18:57:40.977068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.883 #42 NEW cov: 12586 ft: 15843 corp: 27/1667b lim: 85 exec/s: 42 rss: 76Mb L: 83/84 MS: 1 ShuffleBytes- 00:07:23.883 [2024-11-26 18:57:41.036937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.883 [2024-11-26 18:57:41.036965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.883 [2024-11-26 18:57:41.037023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.883 [2024-11-26 18:57:41.037046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.883 [2024-11-26 
18:57:41.037109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.883 [2024-11-26 18:57:41.037130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.883 [2024-11-26 18:57:41.037194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.883 [2024-11-26 18:57:41.037212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.883 #43 NEW cov: 12586 ft: 15878 corp: 28/1746b lim: 85 exec/s: 43 rss: 76Mb L: 79/84 MS: 1 InsertRepeatedBytes- 00:07:24.143 [2024-11-26 18:57:41.097117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.143 [2024-11-26 18:57:41.097146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.143 [2024-11-26 18:57:41.097204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.143 [2024-11-26 18:57:41.097226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.143 [2024-11-26 18:57:41.097289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.143 [2024-11-26 18:57:41.097310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.143 [2024-11-26 18:57:41.097373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.143 [2024-11-26 18:57:41.097392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.143 #44 NEW cov: 12586 ft: 15884 corp: 29/1821b lim: 85 exec/s: 44 rss: 76Mb L: 75/84 MS: 1 ChangeBit- 00:07:24.143 [2024-11-26 18:57:41.156993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.143 [2024-11-26 18:57:41.157020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.143 [2024-11-26 18:57:41.157100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.143 [2024-11-26 18:57:41.157123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.143 #45 NEW cov: 12586 ft: 15896 corp: 30/1871b lim: 85 exec/s: 45 rss: 76Mb L: 50/84 MS: 1 EraseBytes- 00:07:24.143 [2024-11-26 18:57:41.197399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.143 [2024-11-26 18:57:41.197431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.143 [2024-11-26 18:57:41.197499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.143 [2024-11-26 18:57:41.197520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.143 
[2024-11-26 18:57:41.197583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.143 [2024-11-26 18:57:41.197606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.143 [2024-11-26 18:57:41.197671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.143 [2024-11-26 18:57:41.197689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.143 #46 NEW cov: 12586 ft: 15944 corp: 31/1946b lim: 85 exec/s: 46 rss: 77Mb L: 75/84 MS: 1 ChangeByte- 00:07:24.143 [2024-11-26 18:57:41.257727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.143 [2024-11-26 18:57:41.257754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.143 [2024-11-26 18:57:41.257808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.143 [2024-11-26 18:57:41.257828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.143 [2024-11-26 18:57:41.257892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.143 [2024-11-26 18:57:41.257915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.143 [2024-11-26 18:57:41.257980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.143 [2024-11-26 18:57:41.257999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.143 [2024-11-26 18:57:41.258061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:24.143 [2024-11-26 18:57:41.258080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:24.143 #47 NEW cov: 12586 ft: 15989 corp: 32/2031b lim: 85 exec/s: 47 rss: 77Mb L: 85/85 MS: 1 CopyPart- 00:07:24.143 [2024-11-26 18:57:41.317767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.143 [2024-11-26 18:57:41.317795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.143 [2024-11-26 18:57:41.317847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.143 [2024-11-26 18:57:41.317868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.143 [2024-11-26 18:57:41.317932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.143 [2024-11-26 18:57:41.317956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.143 [2024-11-26 18:57:41.318019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER 
(0d) sqid:1 cid:3 nsid:0 00:07:24.143 [2024-11-26 18:57:41.318038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.143 #48 NEW cov: 12586 ft: 16016 corp: 33/2114b lim: 85 exec/s: 48 rss: 77Mb L: 83/85 MS: 1 ChangeBit- 00:07:24.403 [2024-11-26 18:57:41.357930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.403 [2024-11-26 18:57:41.357963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.403 [2024-11-26 18:57:41.358026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.403 [2024-11-26 18:57:41.358048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.403 [2024-11-26 18:57:41.358113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.403 [2024-11-26 18:57:41.358133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.403 [2024-11-26 18:57:41.358199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.403 [2024-11-26 18:57:41.358219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.403 #50 NEW cov: 12586 ft: 16019 corp: 34/2190b lim: 85 exec/s: 50 rss: 77Mb L: 76/85 MS: 2 CrossOver-CrossOver- 00:07:24.403 [2024-11-26 18:57:41.397855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.403 [2024-11-26 18:57:41.397883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.403 [2024-11-26 18:57:41.397941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.403 [2024-11-26 18:57:41.397964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.403 [2024-11-26 18:57:41.398029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.403 [2024-11-26 18:57:41.398051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.403 #51 NEW cov: 12586 ft: 16023 corp: 35/2248b lim: 85 exec/s: 25 rss: 77Mb L: 58/85 MS: 1 CopyPart- 00:07:24.403 #51 DONE cov: 12586 ft: 16023 corp: 35/2248b lim: 85 exec/s: 25 rss: 77Mb 00:07:24.403 ###### Recommended dictionary. ###### 00:07:24.403 "\376\377" # Uses: 3 00:07:24.403 ###### End of recommended dictionary. 
######
00:07:24.403 Done 51 runs in 2 second(s)
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423'
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:24.403 18:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23
[2024-11-26 18:57:41.594482] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
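The xtrace above spells out the per-fuzzer recipe that common.sh and nvmf/run.sh repeat for each index: derive the TCP port from the fuzzer index (printf %02d 23 yields port 4423, i.e. 44 plus the zero-padded index), create the per-fuzzer corpus directory, rewrite the stock fuzz_json.conf from the default port 4420 to the derived port, register two known teardown leaks with LeakSanitizer, and run llvm_nvme_fuzz for one second (-t 1) pinned to core mask 0x1, with -Z apparently selecting which command handler is fuzzed. Below is a minimal standalone sketch reconstructed only from those traced commands; the wrapper name run_one_fuzzer, the SPDK_DIR variable, and the file redirections are assumptions, not shown in this log:

run_one_fuzzer() {                        # hypothetical wrapper; run.sh does this inline
  local fuzzer_type=$1 timen=$2 core=$3   # e.g. 23 1 0x1, matching "start_llvm_fuzz 23 1 0x1"
  local port="44$(printf %02d "$fuzzer_type")"   # printf %02d 23 -> port=4423 (assumed derivation)
  local corpus_dir="$SPDK_DIR/../corpus/llvm_nvmf_$fuzzer_type"   # SPDK_DIR is an assumed variable
  local nvmf_cfg="/tmp/fuzz_json_$fuzzer_type.conf"
  local suppress_file=/var/tmp/suppress_nvmf_fuzz
  mkdir -p "$corpus_dir"
  # Retarget the stock JSON config (which listens on 4420) at this fuzzer's port.
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
  # Suppress the two teardown leaks the harness expects.
  printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "$suppress_file"
  LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0" \
    "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
    -m "$core" -s 512 -P "$SPDK_DIR/../output/llvm/" \
    -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
    -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"
  rm -rf "$nvmf_cfg" "$suppress_file"     # mirrors the run.sh@54 cleanup
}

On that reading, the run starting above corresponds to run_one_fuzzer 23 1 0x1 on port 4423, and run 24 further below repeats the identical recipe on port 4424.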
00:07:24.403 [2024-11-26 18:57:41.594550] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744876 ] 00:07:24.662 [2024-11-26 18:57:41.779820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.662 [2024-11-26 18:57:41.819083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.920 [2024-11-26 18:57:41.878044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.920 [2024-11-26 18:57:41.894200] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:24.920 INFO: Running with entropic power schedule (0xFF, 100). 00:07:24.920 INFO: Seed: 2539609895 00:07:24.920 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:07:24.920 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:07:24.920 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:24.920 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.920 #2 INITED exec/s: 0 rss: 67Mb 00:07:24.920 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:24.920 This may also happen if the target rejected all inputs we tried so far 00:07:24.920 [2024-11-26 18:57:41.941785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:24.920 [2024-11-26 18:57:41.941817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.179 NEW_FUNC[1/717]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:25.179 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:25.179 #9 NEW cov: 12292 ft: 12288 corp: 2/6b lim: 25 exec/s: 0 rss: 74Mb L: 5/5 MS: 2 ChangeByte-CMP- DE: "s\000\000\000"- 00:07:25.179 [2024-11-26 18:57:42.262899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.179 [2024-11-26 18:57:42.262946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.179 [2024-11-26 18:57:42.263023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.179 [2024-11-26 18:57:42.263050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.179 [2024-11-26 18:57:42.263127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.179 [2024-11-26 18:57:42.263153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.179 #11 NEW cov: 12405 ft: 13242 corp: 3/24b lim: 25 exec/s: 0 rss: 75Mb L: 18/18 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:25.179 [2024-11-26 18:57:42.302635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.180 [2024-11-26 18:57:42.302666] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.180 #14 NEW cov: 12411 ft: 13654 corp: 4/29b lim: 25 exec/s: 0 rss: 75Mb L: 5/18 MS: 3 EraseBytes-ChangeByte-CopyPart- 00:07:25.180 [2024-11-26 18:57:42.363030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.180 [2024-11-26 18:57:42.363059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.180 [2024-11-26 18:57:42.363122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.180 [2024-11-26 18:57:42.363143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.180 [2024-11-26 18:57:42.363211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.180 [2024-11-26 18:57:42.363233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.440 #15 NEW cov: 12496 ft: 13921 corp: 5/48b lim: 25 exec/s: 0 rss: 75Mb L: 19/19 MS: 1 InsertByte- 00:07:25.440 [2024-11-26 18:57:42.423290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.440 [2024-11-26 18:57:42.423319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.440 [2024-11-26 18:57:42.423379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.440 [2024-11-26 18:57:42.423400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.440 [2024-11-26 18:57:42.423468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.440 [2024-11-26 18:57:42.423490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.440 [2024-11-26 18:57:42.423556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:25.440 [2024-11-26 18:57:42.423574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.440 #16 NEW cov: 12496 ft: 14456 corp: 6/72b lim: 25 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:07:25.440 [2024-11-26 18:57:42.483578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.440 [2024-11-26 18:57:42.483609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.440 [2024-11-26 18:57:42.483664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.440 [2024-11-26 18:57:42.483686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.440 [2024-11-26 18:57:42.483753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.440 [2024-11-26 18:57:42.483774] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.440 [2024-11-26 18:57:42.483839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:25.440 [2024-11-26 18:57:42.483859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.440 [2024-11-26 18:57:42.483925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:25.440 [2024-11-26 18:57:42.483947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:25.440 #17 NEW cov: 12496 ft: 14645 corp: 7/97b lim: 25 exec/s: 0 rss: 75Mb L: 25/25 MS: 1 CopyPart- 00:07:25.440 [2024-11-26 18:57:42.543612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.440 [2024-11-26 18:57:42.543643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.440 [2024-11-26 18:57:42.543720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.440 [2024-11-26 18:57:42.543742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.440 [2024-11-26 18:57:42.543812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.440 [2024-11-26 18:57:42.543833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.440 [2024-11-26 18:57:42.543900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:25.440 [2024-11-26 18:57:42.543919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.440 #18 NEW cov: 12496 ft: 14739 corp: 8/121b lim: 25 exec/s: 0 rss: 75Mb L: 24/25 MS: 1 CrossOver- 00:07:25.440 [2024-11-26 18:57:42.583868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.440 [2024-11-26 18:57:42.583897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.440 [2024-11-26 18:57:42.583951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.440 [2024-11-26 18:57:42.583973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.440 [2024-11-26 18:57:42.584039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.440 [2024-11-26 18:57:42.584061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.440 [2024-11-26 18:57:42.584125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:25.440 [2024-11-26 18:57:42.584146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.440 
[2024-11-26 18:57:42.584209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:25.440 [2024-11-26 18:57:42.584227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:25.440 #19 NEW cov: 12496 ft: 14806 corp: 9/146b lim: 25 exec/s: 0 rss: 75Mb L: 25/25 MS: 1 ChangeByte- 00:07:25.440 [2024-11-26 18:57:42.643943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.440 [2024-11-26 18:57:42.643972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.440 [2024-11-26 18:57:42.644035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.440 [2024-11-26 18:57:42.644057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.440 [2024-11-26 18:57:42.644128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.440 [2024-11-26 18:57:42.644150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.700 #20 NEW cov: 12496 ft: 14849 corp: 10/164b lim: 25 exec/s: 0 rss: 75Mb L: 18/25 MS: 1 ChangeByte- 00:07:25.700 [2024-11-26 18:57:42.683651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.700 [2024-11-26 18:57:42.683680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.700 #22 NEW cov: 12496 ft: 14988 corp: 11/170b lim: 25 exec/s: 0 rss: 75Mb L: 6/25 MS: 2 CopyPart-CMP- DE: "\015\000\000\000"- 00:07:25.700 [2024-11-26 18:57:42.723767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.700 [2024-11-26 18:57:42.723795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.700 #23 NEW cov: 12496 ft: 15037 corp: 12/176b lim: 25 exec/s: 0 rss: 75Mb L: 6/25 MS: 1 InsertByte- 00:07:25.700 [2024-11-26 18:57:42.764165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.700 [2024-11-26 18:57:42.764194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.700 [2024-11-26 18:57:42.764254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.700 [2024-11-26 18:57:42.764275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.700 [2024-11-26 18:57:42.764342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.700 [2024-11-26 18:57:42.764365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.700 #24 NEW cov: 12496 ft: 15074 corp: 13/194b lim: 25 exec/s: 0 rss: 75Mb L: 18/25 MS: 1 CMP- DE: "\001I\204\336L\313\305\322"- 00:07:25.700 [2024-11-26 18:57:42.824092] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.700 [2024-11-26 18:57:42.824122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.700 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:25.700 #25 NEW cov: 12519 ft: 15153 corp: 14/200b lim: 25 exec/s: 0 rss: 75Mb L: 6/25 MS: 1 InsertByte- 00:07:25.700 [2024-11-26 18:57:42.864691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.700 [2024-11-26 18:57:42.864720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.700 [2024-11-26 18:57:42.864771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.700 [2024-11-26 18:57:42.864792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.700 [2024-11-26 18:57:42.864858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.700 [2024-11-26 18:57:42.864880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.700 [2024-11-26 18:57:42.864943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:25.700 [2024-11-26 18:57:42.864960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.700 [2024-11-26 18:57:42.865026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:25.700 [2024-11-26 18:57:42.865044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:25.700 #26 NEW cov: 12519 ft: 15173 corp: 15/225b lim: 25 exec/s: 0 rss: 75Mb L: 25/25 MS: 1 CrossOver- 00:07:25.700 [2024-11-26 18:57:42.904775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.700 [2024-11-26 18:57:42.904804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.700 [2024-11-26 18:57:42.904858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.700 [2024-11-26 18:57:42.904880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.700 [2024-11-26 18:57:42.904946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.700 [2024-11-26 18:57:42.904972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.700 [2024-11-26 18:57:42.905038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:25.700 [2024-11-26 18:57:42.905057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.700 [2024-11-26 18:57:42.905125] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:25.700 [2024-11-26 18:57:42.905144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:25.960 #27 NEW cov: 12519 ft: 15243 corp: 16/250b lim: 25 exec/s: 27 rss: 75Mb L: 25/25 MS: 1 CrossOver- 00:07:25.960 [2024-11-26 18:57:42.964722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.960 [2024-11-26 18:57:42.964752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.960 [2024-11-26 18:57:42.964812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.960 [2024-11-26 18:57:42.964833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.960 [2024-11-26 18:57:42.964901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.960 [2024-11-26 18:57:42.964920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.960 #28 NEW cov: 12519 ft: 15265 corp: 17/268b lim: 25 exec/s: 28 rss: 75Mb L: 18/25 MS: 1 ChangeBinInt- 00:07:25.960 [2024-11-26 18:57:43.024978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.960 [2024-11-26 18:57:43.025008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.960 [2024-11-26 18:57:43.025065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.960 [2024-11-26 18:57:43.025087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.960 [2024-11-26 18:57:43.025154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.960 [2024-11-26 18:57:43.025177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.960 [2024-11-26 18:57:43.025243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:25.960 [2024-11-26 18:57:43.025263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.960 #29 NEW cov: 12519 ft: 15275 corp: 18/292b lim: 25 exec/s: 29 rss: 76Mb L: 24/25 MS: 1 EraseBytes- 00:07:25.960 [2024-11-26 18:57:43.085032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.960 [2024-11-26 18:57:43.085061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.960 [2024-11-26 18:57:43.085122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.960 [2024-11-26 18:57:43.085144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.960 [2024-11-26 18:57:43.085211] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.960 [2024-11-26 18:57:43.085232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.960 #30 NEW cov: 12519 ft: 15300 corp: 19/311b lim: 25 exec/s: 30 rss: 76Mb L: 19/25 MS: 1 ChangeBinInt- 00:07:25.960 [2024-11-26 18:57:43.145192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.960 [2024-11-26 18:57:43.145220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.960 [2024-11-26 18:57:43.145282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.960 [2024-11-26 18:57:43.145303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.960 [2024-11-26 18:57:43.145369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.960 [2024-11-26 18:57:43.145392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.960 #31 NEW cov: 12519 ft: 15307 corp: 20/328b lim: 25 exec/s: 31 rss: 76Mb L: 17/25 MS: 1 EraseBytes- 00:07:26.220 [2024-11-26 18:57:43.185085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.220 [2024-11-26 18:57:43.185115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.220 #32 NEW cov: 12519 ft: 15380 corp: 21/333b lim: 25 exec/s: 32 rss: 76Mb L: 5/25 MS: 1 EraseBytes- 00:07:26.220 [2024-11-26 18:57:43.245729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.220 [2024-11-26 18:57:43.245757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.220 [2024-11-26 18:57:43.245815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.220 [2024-11-26 18:57:43.245836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.220 [2024-11-26 18:57:43.245905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.220 [2024-11-26 18:57:43.245924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.220 [2024-11-26 18:57:43.245990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.220 [2024-11-26 18:57:43.246010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.220 [2024-11-26 18:57:43.246076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:26.220 [2024-11-26 18:57:43.246095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.220 #33 NEW cov: 12519 ft: 15392 corp: 22/358b lim: 25 exec/s: 
33 rss: 76Mb L: 25/25 MS: 1 PersAutoDict- DE: "\001I\204\336L\313\305\322"- 00:07:26.220 [2024-11-26 18:57:43.305554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.220 [2024-11-26 18:57:43.305582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.220 [2024-11-26 18:57:43.305647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.220 [2024-11-26 18:57:43.305668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.220 #34 NEW cov: 12519 ft: 15638 corp: 23/368b lim: 25 exec/s: 34 rss: 76Mb L: 10/25 MS: 1 InsertRepeatedBytes- 00:07:26.220 [2024-11-26 18:57:43.365678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.220 [2024-11-26 18:57:43.365707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.220 [2024-11-26 18:57:43.365783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.220 [2024-11-26 18:57:43.365805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.220 #35 NEW cov: 12519 ft: 15656 corp: 24/378b lim: 25 exec/s: 35 rss: 76Mb L: 10/25 MS: 1 CrossOver- 00:07:26.220 [2024-11-26 18:57:43.426256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.220 [2024-11-26 18:57:43.426285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.220 [2024-11-26 18:57:43.426339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.220 [2024-11-26 18:57:43.426361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.220 [2024-11-26 18:57:43.426427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.220 [2024-11-26 18:57:43.426448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.220 [2024-11-26 18:57:43.426519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.220 [2024-11-26 18:57:43.426540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.220 [2024-11-26 18:57:43.426621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:26.220 [2024-11-26 18:57:43.426641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.480 #36 NEW cov: 12519 ft: 15712 corp: 25/403b lim: 25 exec/s: 36 rss: 76Mb L: 25/25 MS: 1 ChangeBinInt- 00:07:26.480 [2024-11-26 18:57:43.465972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.480 [2024-11-26 18:57:43.466001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.480 [2024-11-26 18:57:43.466067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.480 [2024-11-26 18:57:43.466088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.480 #37 NEW cov: 12519 ft: 15720 corp: 26/417b lim: 25 exec/s: 37 rss: 77Mb L: 14/25 MS: 1 PersAutoDict- DE: "\001I\204\336L\313\305\322"- 00:07:26.480 [2024-11-26 18:57:43.526453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.480 [2024-11-26 18:57:43.526487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.480 [2024-11-26 18:57:43.526547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.480 [2024-11-26 18:57:43.526568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.480 [2024-11-26 18:57:43.526636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.480 [2024-11-26 18:57:43.526655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.480 [2024-11-26 18:57:43.526721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.480 [2024-11-26 18:57:43.526741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.480 [2024-11-26 18:57:43.526804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:26.480 [2024-11-26 18:57:43.526823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.480 #38 NEW cov: 12519 ft: 15730 corp: 27/442b lim: 25 exec/s: 38 rss: 77Mb L: 25/25 MS: 1 ShuffleBytes- 00:07:26.480 [2024-11-26 18:57:43.586187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.480 [2024-11-26 18:57:43.586216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.480 #41 NEW cov: 12519 ft: 15769 corp: 28/449b lim: 25 exec/s: 41 rss: 77Mb L: 7/25 MS: 3 CrossOver-CopyPart-CMP- DE: "\377\377\377\002"- 00:07:26.480 [2024-11-26 18:57:43.626295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.480 [2024-11-26 18:57:43.626324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.481 #42 NEW cov: 12519 ft: 15786 corp: 29/454b lim: 25 exec/s: 42 rss: 77Mb L: 5/25 MS: 1 ChangeByte- 00:07:26.481 [2024-11-26 18:57:43.666886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.481 [2024-11-26 18:57:43.666914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.481 [2024-11-26 18:57:43.666971] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.481 [2024-11-26 18:57:43.666992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.481 [2024-11-26 18:57:43.667061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.481 [2024-11-26 18:57:43.667083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.481 [2024-11-26 18:57:43.667150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.481 [2024-11-26 18:57:43.667172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.481 [2024-11-26 18:57:43.667239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:26.481 [2024-11-26 18:57:43.667260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.740 #43 NEW cov: 12519 ft: 15850 corp: 30/479b lim: 25 exec/s: 43 rss: 77Mb L: 25/25 MS: 1 PersAutoDict- DE: "s\000\000\000"- 00:07:26.740 [2024-11-26 18:57:43.726827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.740 [2024-11-26 18:57:43.726855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.740 [2024-11-26 18:57:43.726917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.740 [2024-11-26 18:57:43.726938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.740 [2024-11-26 18:57:43.727005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.740 [2024-11-26 18:57:43.727026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.740 #44 NEW cov: 12519 ft: 15866 corp: 31/497b lim: 25 exec/s: 44 rss: 77Mb L: 18/25 MS: 1 ChangeBinInt- 00:07:26.740 [2024-11-26 18:57:43.767182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.740 [2024-11-26 18:57:43.767210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.740 [2024-11-26 18:57:43.767268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.740 [2024-11-26 18:57:43.767290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.740 [2024-11-26 18:57:43.767363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.740 [2024-11-26 18:57:43.767383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.740 [2024-11-26 18:57:43.767449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.740 
[2024-11-26 18:57:43.767467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.740 [2024-11-26 18:57:43.767539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:26.740 [2024-11-26 18:57:43.767559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.740 #45 NEW cov: 12519 ft: 15883 corp: 32/522b lim: 25 exec/s: 45 rss: 77Mb L: 25/25 MS: 1 CrossOver- 00:07:26.740 [2024-11-26 18:57:43.827243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.740 [2024-11-26 18:57:43.827271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.740 [2024-11-26 18:57:43.827324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.740 [2024-11-26 18:57:43.827346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.740 [2024-11-26 18:57:43.827411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.740 [2024-11-26 18:57:43.827434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.740 [2024-11-26 18:57:43.827505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.740 [2024-11-26 18:57:43.827524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.740 #46 NEW cov: 12519 ft: 15898 corp: 33/545b lim: 25 exec/s: 46 rss: 77Mb L: 23/25 MS: 1 EraseBytes- 00:07:26.740 [2024-11-26 18:57:43.867101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.740 [2024-11-26 18:57:43.867130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.740 [2024-11-26 18:57:43.867200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.740 [2024-11-26 18:57:43.867224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.740 #47 NEW cov: 12519 ft: 15909 corp: 34/559b lim: 25 exec/s: 47 rss: 77Mb L: 14/25 MS: 1 ShuffleBytes- 00:07:26.740 [2024-11-26 18:57:43.927359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.740 [2024-11-26 18:57:43.927388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.740 [2024-11-26 18:57:43.927445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.740 [2024-11-26 18:57:43.927467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.740 [2024-11-26 18:57:43.927544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.740 [2024-11-26 
18:57:43.927566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.999 #48 NEW cov: 12519 ft: 15925 corp: 35/577b lim: 25 exec/s: 24 rss: 77Mb L: 18/25 MS: 1 ChangeByte- 00:07:26.999 #48 DONE cov: 12519 ft: 15925 corp: 35/577b lim: 25 exec/s: 24 rss: 77Mb 00:07:26.999 ###### Recommended dictionary. ###### 00:07:26.999 "s\000\000\000" # Uses: 1 00:07:26.999 "\015\000\000\000" # Uses: 0 00:07:26.999 "\001I\204\336L\313\305\322" # Uses: 2 00:07:26.999 "\377\377\377\002" # Uses: 0 00:07:26.999 ###### End of recommended dictionary. ###### 00:07:26.999 Done 48 runs in 2 second(s) 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:26.999 18:57:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:26.999 [2024-11-26 18:57:44.125612] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
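The xtrace above (nvmf/run.sh@23-45) spells out how each fuzzer instance is staged before launch. Below is a minimal sketch of that flow, reconstructed from the traced commands only: the 44-plus-zero-padded-index port derivation, the redirections on the sed and echo lines (xtrace does not echo redirections), and the $rootdir/$output_dir names are assumptions, not confirmed SPDK source.

  # Sketch of start_llvm_fuzz as suggested by the nvmf/run.sh xtrace above (hypothetical reconstruction)
  start_llvm_fuzz() {
      local fuzzer_type=$1    # e.g. 24
      local timen=$2          # run time in seconds, e.g. 1
      local core=$3           # core mask, e.g. 0x1
      local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type   # $rootdir assumed
      local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
      local suppress_file=/var/tmp/suppress_nvmf_fuzz
      # LeakSanitizer: log leaked objects, apply the suppression file quietly
      local LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0

      # Each fuzzer index gets its own TCP listener; assumed derivation: 44 + %02d index -> 4424
      local port=44$(printf %02d $fuzzer_type)
      mkdir -p $corpus_dir

      # Transport ID the fuzzer connects to, as printed in the trace
      local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

      # Rewrite the template JSON config for this run's port (output redirection assumed)
      sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
          $rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf > $nvmf_cfg

      # Known, intentional teardown-path leaks are suppressed (redirections assumed)
      echo "leak:spdk_nvmf_qpair_disconnect" > $suppress_file
      echo "leak:nvmf_ctrlr_create" >> $suppress_file

      $rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m $core -s 512 \
          -P $output_dir/llvm/ -F "$trid" -c $nvmf_cfg -t $timen -D $corpus_dir -Z $fuzzer_type
  }

Only the port, JSON config, and corpus directory change between runs; run.sh@54 then removes the per-run config and suppression file, as the rm -rf lines in this log confirm.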
00:07:26.999 [2024-11-26 18:57:44.125681] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2745176 ] 00:07:27.258 [2024-11-26 18:57:44.315251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.258 [2024-11-26 18:57:44.353713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.258 [2024-11-26 18:57:44.413048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.258 [2024-11-26 18:57:44.429193] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:27.258 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.258 INFO: Seed: 782608959 00:07:27.258 INFO: Loaded 1 modules (389554 inline 8-bit counters): 389554 [0x2c6b24c, 0x2cca3fe), 00:07:27.258 INFO: Loaded 1 PC tables (389554 PCs): 389554 [0x2cca400,0x32bbf20), 00:07:27.258 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:27.259 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.259 #2 INITED exec/s: 0 rss: 67Mb 00:07:27.259 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:27.259 This may also happen if the target rejected all inputs we tried so far 00:07:27.518 [2024-11-26 18:57:44.484579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.518 [2024-11-26 18:57:44.484615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.777 NEW_FUNC[1/718]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:27.777 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:27.777 #11 NEW cov: 12364 ft: 12363 corp: 2/28b lim: 100 exec/s: 0 rss: 74Mb L: 27/27 MS: 4 CopyPart-EraseBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:27.777 [2024-11-26 18:57:44.805378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:11225 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.777 [2024-11-26 18:57:44.805422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.777 #12 NEW cov: 12477 ft: 12895 corp: 3/56b lim: 100 exec/s: 0 rss: 75Mb L: 28/28 MS: 1 InsertByte- 00:07:27.777 [2024-11-26 18:57:44.865439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.777 [2024-11-26 18:57:44.865469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.777 #18 NEW cov: 12483 ft: 13206 corp: 4/83b lim: 100 exec/s: 0 rss: 75Mb L: 27/28 MS: 1 ShuffleBytes- 00:07:27.777 [2024-11-26 18:57:44.905509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.777 [2024-11-26 18:57:44.905539] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.777 #19 NEW cov: 12568 ft: 13458 corp: 5/111b lim: 100 exec/s: 0 rss: 75Mb L: 28/28 MS: 1 CrossOver- 00:07:27.777 [2024-11-26 18:57:44.965648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.777 [2024-11-26 18:57:44.965677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.035 #20 NEW cov: 12568 ft: 13598 corp: 6/139b lim: 100 exec/s: 0 rss: 75Mb L: 28/28 MS: 1 ChangeByte- 00:07:28.035 [2024-11-26 18:57:45.025809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.035 [2024-11-26 18:57:45.025837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.035 #21 NEW cov: 12568 ft: 13666 corp: 7/167b lim: 100 exec/s: 0 rss: 75Mb L: 28/28 MS: 1 ShuffleBytes- 00:07:28.035 [2024-11-26 18:57:45.085993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.035 [2024-11-26 18:57:45.086022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.035 #22 NEW cov: 12568 ft: 13766 corp: 8/196b lim: 100 exec/s: 0 rss: 75Mb L: 29/29 MS: 1 InsertByte- 00:07:28.035 [2024-11-26 18:57:45.126406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.035 [2024-11-26 18:57:45.126435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.036 [2024-11-26 18:57:45.126504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10923366099644373143 len:38808 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.036 [2024-11-26 18:57:45.126528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.036 [2024-11-26 18:57:45.126595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:10923366098549577623 len:38808 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.036 [2024-11-26 18:57:45.126621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.036 #23 NEW cov: 12568 ft: 14649 corp: 9/267b lim: 100 exec/s: 0 rss: 75Mb L: 71/71 MS: 1 InsertRepeatedBytes- 00:07:28.036 [2024-11-26 18:57:45.166230] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:11225 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.036 [2024-11-26 18:57:45.166258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.036 #24 NEW cov: 12568 ft: 14669 corp: 10/295b lim: 100 exec/s: 0 rss: 75Mb L: 28/71 MS: 1 ChangeByte- 00:07:28.036 [2024-11-26 18:57:45.206359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 
nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.036 [2024-11-26 18:57:45.206387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.036 #25 NEW cov: 12568 ft: 14713 corp: 11/323b lim: 100 exec/s: 0 rss: 75Mb L: 28/71 MS: 1 ChangeBit- 00:07:28.036 [2024-11-26 18:57:45.246447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.036 [2024-11-26 18:57:45.246482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.295 #26 NEW cov: 12568 ft: 14738 corp: 12/351b lim: 100 exec/s: 0 rss: 75Mb L: 28/71 MS: 1 ChangeByte- 00:07:28.295 [2024-11-26 18:57:45.286605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.295 [2024-11-26 18:57:45.286634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.295 #27 NEW cov: 12568 ft: 14817 corp: 13/380b lim: 100 exec/s: 0 rss: 75Mb L: 29/71 MS: 1 ChangeBit- 00:07:28.295 [2024-11-26 18:57:45.347196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.295 [2024-11-26 18:57:45.347224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.295 [2024-11-26 18:57:45.347284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15625477333024561368 len:55497 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.295 [2024-11-26 18:57:45.347306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.295 [2024-11-26 18:57:45.347369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:10923366098549577623 len:38808 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.295 [2024-11-26 18:57:45.347390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.295 [2024-11-26 18:57:45.347475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:10923366098549577623 len:38808 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.295 [2024-11-26 18:57:45.347495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.295 NEW_FUNC[1/1]: 0x1c47058 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:28.295 #38 NEW cov: 12591 ft: 15221 corp: 14/464b lim: 100 exec/s: 0 rss: 75Mb L: 84/84 MS: 1 CrossOver- 00:07:28.295 [2024-11-26 18:57:45.406930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.295 [2024-11-26 18:57:45.406958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.295 #39 NEW cov: 12591 ft: 15316 corp: 15/492b lim: 100 exec/s: 0 rss: 75Mb L: 28/84 
MS: 1 ShuffleBytes- 00:07:28.295 [2024-11-26 18:57:45.467077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:11225 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.295 [2024-11-26 18:57:45.467107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.554 #40 NEW cov: 12591 ft: 15366 corp: 16/520b lim: 100 exec/s: 40 rss: 75Mb L: 28/84 MS: 1 ChangeBit- 00:07:28.554 [2024-11-26 18:57:45.527261] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.554 [2024-11-26 18:57:45.527290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.554 #41 NEW cov: 12591 ft: 15424 corp: 17/548b lim: 100 exec/s: 41 rss: 75Mb L: 28/84 MS: 1 ShuffleBytes- 00:07:28.554 [2024-11-26 18:57:45.587611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.554 [2024-11-26 18:57:45.587642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.554 [2024-11-26 18:57:45.587709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.554 [2024-11-26 18:57:45.587730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.554 #42 NEW cov: 12591 ft: 15738 corp: 18/605b lim: 100 exec/s: 42 rss: 76Mb L: 57/84 MS: 1 CrossOver- 00:07:28.554 [2024-11-26 18:57:45.647607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.554 [2024-11-26 18:57:45.647637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.554 #43 NEW cov: 12591 ft: 15760 corp: 19/633b lim: 100 exec/s: 43 rss: 76Mb L: 28/84 MS: 1 InsertByte- 00:07:28.554 [2024-11-26 18:57:45.708045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:10634005405787984787 len:37780 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.554 [2024-11-26 18:57:45.708074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.554 [2024-11-26 18:57:45.708130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10634005407197270931 len:37780 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.554 [2024-11-26 18:57:45.708152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.554 [2024-11-26 18:57:45.708219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:10634005407197270931 len:37780 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.554 [2024-11-26 18:57:45.708243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.554 #48 NEW cov: 12591 ft: 15764 corp: 20/709b lim: 100 exec/s: 
48 rss: 76Mb L: 76/84 MS: 5 InsertByte-EraseBytes-ChangeBinInt-ChangeBinInt-InsertRepeatedBytes- 00:07:28.554 [2024-11-26 18:57:45.747873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15620973733397190872 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.554 [2024-11-26 18:57:45.747902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.812 #49 NEW cov: 12591 ft: 15796 corp: 21/737b lim: 100 exec/s: 49 rss: 76Mb L: 28/84 MS: 1 ChangeBit- 00:07:28.813 [2024-11-26 18:57:45.807984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15572322542891358424 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.813 [2024-11-26 18:57:45.808016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.813 #50 NEW cov: 12591 ft: 15827 corp: 22/765b lim: 100 exec/s: 50 rss: 76Mb L: 28/84 MS: 1 ChangeBinInt- 00:07:28.813 [2024-11-26 18:57:45.848132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.813 [2024-11-26 18:57:45.848160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.813 #51 NEW cov: 12591 ft: 15831 corp: 23/797b lim: 100 exec/s: 51 rss: 76Mb L: 32/84 MS: 1 CopyPart- 00:07:28.813 [2024-11-26 18:57:45.888243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.813 [2024-11-26 18:57:45.888271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.813 #55 NEW cov: 12591 ft: 15845 corp: 24/833b lim: 100 exec/s: 55 rss: 76Mb L: 36/84 MS: 4 CrossOver-CopyPart-ChangeBit-CrossOver- 00:07:28.813 [2024-11-26 18:57:45.948396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.813 [2024-11-26 18:57:45.948424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.813 #66 NEW cov: 12591 ft: 15891 corp: 25/861b lim: 100 exec/s: 66 rss: 76Mb L: 28/84 MS: 1 ChangeBit- 00:07:28.813 [2024-11-26 18:57:45.988816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:26729 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.813 [2024-11-26 18:57:45.988844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.813 [2024-11-26 18:57:45.988904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7523377975159973992 len:26729 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.813 [2024-11-26 18:57:45.988925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.813 [2024-11-26 18:57:45.988990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:7523377975159973992 len:26729 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.813 [2024-11-26 
18:57:45.989011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.813 #67 NEW cov: 12591 ft: 15911 corp: 26/934b lim: 100 exec/s: 67 rss: 76Mb L: 73/84 MS: 1 InsertRepeatedBytes- 00:07:29.071 [2024-11-26 18:57:46.028630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.071 [2024-11-26 18:57:46.028660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.071 #68 NEW cov: 12591 ft: 15961 corp: 27/962b lim: 100 exec/s: 68 rss: 76Mb L: 28/84 MS: 1 EraseBytes- 00:07:29.071 [2024-11-26 18:57:46.068898] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.071 [2024-11-26 18:57:46.068926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.072 [2024-11-26 18:57:46.068990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.072 [2024-11-26 18:57:46.069012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.072 #69 NEW cov: 12591 ft: 15970 corp: 28/1010b lim: 100 exec/s: 69 rss: 76Mb L: 48/84 MS: 1 CrossOver- 00:07:29.072 [2024-11-26 18:57:46.108880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15620973733397190872 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.072 [2024-11-26 18:57:46.108908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.072 #70 NEW cov: 12591 ft: 16032 corp: 29/1034b lim: 100 exec/s: 70 rss: 76Mb L: 24/84 MS: 1 EraseBytes- 00:07:29.072 [2024-11-26 18:57:46.168988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.072 [2024-11-26 18:57:46.169016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.072 #71 NEW cov: 12591 ft: 16037 corp: 30/1061b lim: 100 exec/s: 71 rss: 76Mb L: 27/84 MS: 1 EraseBytes- 00:07:29.072 [2024-11-26 18:57:46.229175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.072 [2024-11-26 18:57:46.229204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.072 #72 NEW cov: 12591 ft: 16068 corp: 31/1090b lim: 100 exec/s: 72 rss: 76Mb L: 29/84 MS: 1 CrossOver- 00:07:29.072 [2024-11-26 18:57:46.269283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.072 [2024-11-26 18:57:46.269312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.330 #73 NEW cov: 12591 ft: 16100 corp: 32/1118b lim: 100 exec/s: 73 rss: 
76Mb L: 28/84 MS: 1 ShuffleBytes- 00:07:29.330 [2024-11-26 18:57:46.309430] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.330 [2024-11-26 18:57:46.309458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.330 #74 NEW cov: 12591 ft: 16130 corp: 33/1147b lim: 100 exec/s: 74 rss: 77Mb L: 29/84 MS: 1 CopyPart- 00:07:29.330 [2024-11-26 18:57:46.369736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.330 [2024-11-26 18:57:46.369764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.330 [2024-11-26 18:57:46.369830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15625250833629239512 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.330 [2024-11-26 18:57:46.369851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.330 #75 NEW cov: 12591 ft: 16143 corp: 34/1199b lim: 100 exec/s: 75 rss: 77Mb L: 52/84 MS: 1 CrossOver- 00:07:29.330 [2024-11-26 18:57:46.409682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.330 [2024-11-26 18:57:46.409711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.330 #76 NEW cov: 12591 ft: 16155 corp: 35/1228b lim: 100 exec/s: 76 rss: 77Mb L: 29/84 MS: 1 InsertByte- 00:07:29.330 [2024-11-26 18:57:46.450383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477333024561368 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.330 [2024-11-26 18:57:46.450414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.330 [2024-11-26 18:57:46.450476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15625477333024561368 len:55497 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.330 [2024-11-26 18:57:46.450503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.330 [2024-11-26 18:57:46.450583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:10923366098549577623 len:38873 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.330 [2024-11-26 18:57:46.450607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.330 [2024-11-26 18:57:46.450674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:15625304430526126296 len:38808 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.330 [2024-11-26 18:57:46.450694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.330 [2024-11-26 18:57:46.450760] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:10923366098549577623 len:38808 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:29.330 [2024-11-26 18:57:46.450779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:29.330 #77 NEW cov: 12591 ft: 16216 corp: 36/1328b lim: 100 exec/s: 38 rss: 77Mb L: 100/100 MS: 1 CrossOver- 00:07:29.330 #77 DONE cov: 12591 ft: 16216 corp: 36/1328b lim: 100 exec/s: 38 rss: 77Mb 00:07:29.330 Done 77 runs in 2 second(s) 00:07:29.589 18:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:29.589 18:57:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:29.589 18:57:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.589 18:57:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:29.589 00:07:29.589 real 1m3.393s 00:07:29.589 user 1m39.845s 00:07:29.589 sys 0m7.102s 00:07:29.589 18:57:46 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.589 18:57:46 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:29.589 ************************************ 00:07:29.589 END TEST nvmf_llvm_fuzz 00:07:29.589 ************************************ 00:07:29.589 18:57:46 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:29.589 18:57:46 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:29.589 18:57:46 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:29.589 18:57:46 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.589 18:57:46 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.589 18:57:46 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:29.589 ************************************ 00:07:29.589 START TEST vfio_llvm_fuzz 00:07:29.589 ************************************ 00:07:29.589 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:29.589 * Looking for test storage... 
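With run 24 complete, the ../common.sh driver advances once more, finds no further targets, and the nvmf suite prints its totals and exits. A rough sketch of that loop, inferred only from the (( i++ )), (( i < fuzz_num )), and start_llvm_fuzz 24 1 0x1 lines traced in this log; the loop shape and fuzz_num's initialization are assumptions:

  # Inferred driver shape in test/fuzz/llvm/nvmf/../common.sh (assumption: a simple counted loop)
  i=0
  while (( i < fuzz_num )); do          # fuzz_num: number of fuzzer targets, set before the loop (assumed)
      start_llvm_fuzz $i $timen $core   # e.g. start_llvm_fuzz 24 1 0x1 for the run above
      (( i++ ))
  done
  trap - SIGINT SIGTERM EXIT            # matches nvmf/run.sh@79 once the loop falls through

The vfio suite that starts next is launched the same way, via run_test on test/fuzz/llvm/vfio/run.sh.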
00:07:29.589 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.589 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.589 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.589 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.851 --rc genhtml_branch_coverage=1 00:07:29.851 --rc genhtml_function_coverage=1 00:07:29.851 --rc genhtml_legend=1 00:07:29.851 --rc geninfo_all_blocks=1 00:07:29.851 --rc geninfo_unexecuted_blocks=1 00:07:29.851 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:29.851 ' 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.851 --rc genhtml_branch_coverage=1 00:07:29.851 --rc genhtml_function_coverage=1 00:07:29.851 --rc genhtml_legend=1 00:07:29.851 --rc geninfo_all_blocks=1 00:07:29.851 --rc geninfo_unexecuted_blocks=1 00:07:29.851 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:29.851 ' 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.851 --rc genhtml_branch_coverage=1 00:07:29.851 --rc genhtml_function_coverage=1 00:07:29.851 --rc genhtml_legend=1 00:07:29.851 --rc geninfo_all_blocks=1 00:07:29.851 --rc geninfo_unexecuted_blocks=1 00:07:29.851 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:29.851 ' 00:07:29.851 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.851 --rc genhtml_branch_coverage=1 00:07:29.851 --rc genhtml_function_coverage=1 00:07:29.851 --rc genhtml_legend=1 00:07:29.851 --rc geninfo_all_blocks=1 00:07:29.851 --rc geninfo_unexecuted_blocks=1 00:07:29.851 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:29.851 ' 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:29.852 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:29.853 #define SPDK_CONFIG_H 00:07:29.853 #define SPDK_CONFIG_AIO_FSDEV 1 00:07:29.853 #define SPDK_CONFIG_APPS 1 00:07:29.853 #define SPDK_CONFIG_ARCH native 00:07:29.853 #undef SPDK_CONFIG_ASAN 00:07:29.853 #undef SPDK_CONFIG_AVAHI 00:07:29.853 #undef SPDK_CONFIG_CET 00:07:29.853 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:07:29.853 #define SPDK_CONFIG_COVERAGE 1 00:07:29.853 #define SPDK_CONFIG_CROSS_PREFIX 00:07:29.853 #undef SPDK_CONFIG_CRYPTO 00:07:29.853 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:29.853 #undef SPDK_CONFIG_CUSTOMOCF 00:07:29.853 #undef SPDK_CONFIG_DAOS 00:07:29.853 #define SPDK_CONFIG_DAOS_DIR 00:07:29.853 #define SPDK_CONFIG_DEBUG 1 00:07:29.853 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:29.853 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:29.853 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:29.853 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:29.853 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:29.853 #undef SPDK_CONFIG_DPDK_UADK 00:07:29.853 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:29.853 #define SPDK_CONFIG_EXAMPLES 1 00:07:29.853 #undef SPDK_CONFIG_FC 00:07:29.853 #define SPDK_CONFIG_FC_PATH 00:07:29.853 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:29.853 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:29.853 #define SPDK_CONFIG_FSDEV 1 00:07:29.853 #undef SPDK_CONFIG_FUSE 00:07:29.853 #define SPDK_CONFIG_FUZZER 1 00:07:29.853 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:29.853 #undef 
SPDK_CONFIG_GOLANG 00:07:29.853 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:29.853 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:29.853 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:29.853 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:29.853 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:29.853 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:29.853 #undef SPDK_CONFIG_HAVE_LZ4 00:07:29.853 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:07:29.853 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:07:29.853 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:29.853 #define SPDK_CONFIG_IDXD 1 00:07:29.853 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:29.853 #undef SPDK_CONFIG_IPSEC_MB 00:07:29.853 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:29.853 #define SPDK_CONFIG_ISAL 1 00:07:29.853 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:29.853 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:29.853 #define SPDK_CONFIG_LIBDIR 00:07:29.853 #undef SPDK_CONFIG_LTO 00:07:29.853 #define SPDK_CONFIG_MAX_LCORES 128 00:07:29.853 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:07:29.853 #define SPDK_CONFIG_NVME_CUSE 1 00:07:29.853 #undef SPDK_CONFIG_OCF 00:07:29.853 #define SPDK_CONFIG_OCF_PATH 00:07:29.853 #define SPDK_CONFIG_OPENSSL_PATH 00:07:29.853 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:29.853 #define SPDK_CONFIG_PGO_DIR 00:07:29.853 #undef SPDK_CONFIG_PGO_USE 00:07:29.853 #define SPDK_CONFIG_PREFIX /usr/local 00:07:29.853 #undef SPDK_CONFIG_RAID5F 00:07:29.853 #undef SPDK_CONFIG_RBD 00:07:29.853 #define SPDK_CONFIG_RDMA 1 00:07:29.853 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:29.853 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:29.853 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:29.853 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:29.853 #undef SPDK_CONFIG_SHARED 00:07:29.853 #undef SPDK_CONFIG_SMA 00:07:29.853 #define SPDK_CONFIG_TESTS 1 00:07:29.853 #undef SPDK_CONFIG_TSAN 00:07:29.853 #define SPDK_CONFIG_UBLK 1 00:07:29.853 #define SPDK_CONFIG_UBSAN 1 00:07:29.853 #undef SPDK_CONFIG_UNIT_TESTS 00:07:29.853 #undef SPDK_CONFIG_URING 00:07:29.853 #define SPDK_CONFIG_URING_PATH 00:07:29.853 #undef SPDK_CONFIG_URING_ZNS 00:07:29.853 #undef SPDK_CONFIG_USDT 00:07:29.853 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:29.853 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:29.853 #define SPDK_CONFIG_VFIO_USER 1 00:07:29.853 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:29.853 #define SPDK_CONFIG_VHOST 1 00:07:29.853 #define SPDK_CONFIG_VIRTIO 1 00:07:29.853 #undef SPDK_CONFIG_VTUNE 00:07:29.853 #define SPDK_CONFIG_VTUNE_DIR 00:07:29.853 #define SPDK_CONFIG_WERROR 1 00:07:29.853 #define SPDK_CONFIG_WPDK_DIR 00:07:29.853 #undef SPDK_CONFIG_XNVME 00:07:29.853 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:29.853 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:29.854 18:57:46 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
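The long run of paired `: 0` / `export SPDK_TEST_*` entries in this trace is bash's default-then-export idiom: `:` is a no-op, and the `${VAR:=default}` expansion inside its argument assigns the default only when the variable is unset or empty, which is exactly why xtrace prints the bare default value before each export. A minimal sketch of the idiom (the variable name below is made up for illustration):

    : "${SPDK_TEST_EXAMPLE:=0}"   # hypothetical flag; assigns 0 only if unset/empty
    export SPDK_TEST_EXAMPLE      # then exported for child processes

Non-numeric defaults work the same way, e.g. the `: rdma` seen before the SPDK_TEST_NVMF_TRANSPORT export above.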
00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:29.854 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:29.855 18:57:46 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j72 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 2745560 ]] 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 2745560 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:07:29.855 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.2gWL5G 00:07:29.856 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:29.856 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:07:29.856 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:07:29.856 18:57:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.2gWL5G/tests/vfio /tmp/spdk.2gWL5G 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=86640017408 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=94500356096 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=7860338688 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=47245414400 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=47250178048 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=18894340096 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=18900074496 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5734400 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=47249727488 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=47250178048 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=450560 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=9450020864 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=9450033152 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:07:29.856 * Looking for test storage... 
00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=86640017408 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=10074931200 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.856 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set -o errtrace 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1685 -- # true 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # xtrace_fd 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.856 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.116 --rc genhtml_branch_coverage=1 00:07:30.116 --rc genhtml_function_coverage=1 00:07:30.116 --rc genhtml_legend=1 00:07:30.116 --rc geninfo_all_blocks=1 00:07:30.116 --rc geninfo_unexecuted_blocks=1 00:07:30.116 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:30.116 ' 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.116 --rc genhtml_branch_coverage=1 00:07:30.116 --rc genhtml_function_coverage=1 00:07:30.116 --rc genhtml_legend=1 00:07:30.116 --rc geninfo_all_blocks=1 00:07:30.116 --rc geninfo_unexecuted_blocks=1 00:07:30.116 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:30.116 ' 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.116 --rc genhtml_branch_coverage=1 00:07:30.116 --rc genhtml_function_coverage=1 00:07:30.116 --rc genhtml_legend=1 00:07:30.116 --rc geninfo_all_blocks=1 00:07:30.116 --rc geninfo_unexecuted_blocks=1 00:07:30.116 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:30.116 ' 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.116 --rc genhtml_branch_coverage=1 00:07:30.116 --rc genhtml_function_coverage=1 00:07:30.116 --rc genhtml_legend=1 00:07:30.116 --rc geninfo_all_blocks=1 00:07:30.116 --rc geninfo_unexecuted_blocks=1 00:07:30.116 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:30.116 ' 00:07:30.116 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:30.116 18:57:47 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:30.117 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:30.117 18:57:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:30.117 [2024-11-26 18:57:47.178164] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:07:30.117 [2024-11-26 18:57:47.178236] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2745775 ] 00:07:30.117 [2024-11-26 18:57:47.258953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.117 [2024-11-26 18:57:47.303993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.376 INFO: Running with entropic power schedule (0xFF, 100). 00:07:30.376 INFO: Seed: 3826625363 00:07:30.376 INFO: Loaded 1 modules (386790 inline 8-bit counters): 386790 [0x2c2ca4c, 0x2c8b132), 00:07:30.376 INFO: Loaded 1 PC tables (386790 PCs): 386790 [0x2c8b138,0x3271f98), 00:07:30.376 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:30.376 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.376 #2 INITED exec/s: 0 rss: 67Mb 00:07:30.376 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:30.376 This may also happen if the target rejected all inputs we tried so far 00:07:30.376 [2024-11-26 18:57:47.544176] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:30.895 NEW_FUNC[1/676]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:30.895 NEW_FUNC[2/676]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:30.895 #44 NEW cov: 11213 ft: 11182 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:30.895 #47 NEW cov: 11227 ft: 14143 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 3 EraseBytes-ShuffleBytes-InsertByte- 00:07:31.155 #48 NEW cov: 11227 ft: 15351 corp: 4/19b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 CopyPart- 00:07:31.155 NEW_FUNC[1/1]: 0x1c134a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:31.155 #49 NEW cov: 11244 ft: 15544 corp: 5/25b lim: 6 exec/s: 0 rss: 77Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:31.414 #50 NEW cov: 11244 ft: 15881 corp: 6/31b lim: 6 exec/s: 0 rss: 77Mb L: 6/6 MS: 1 ChangeBit- 00:07:31.414 #52 NEW cov: 11244 ft: 16195 corp: 7/37b lim: 6 exec/s: 52 rss: 77Mb L: 6/6 MS: 2 CrossOver-CMP- DE: "\001\003"- 00:07:31.673 #53 NEW cov: 11244 ft: 16368 corp: 8/43b lim: 6 exec/s: 53 rss: 77Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:31.673 #54 NEW cov: 11244 ft: 17558 corp: 9/49b lim: 6 exec/s: 54 rss: 77Mb L: 6/6 MS: 1 ChangeByte- 00:07:31.931 #55 NEW cov: 11244 ft: 17591 corp: 10/55b lim: 6 exec/s: 55 rss: 77Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:31.931 #56 NEW cov: 11244 ft: 17716 corp: 11/61b lim: 6 exec/s: 56 rss: 77Mb L: 6/6 MS: 1 CrossOver- 00:07:32.201 #62 NEW cov: 11244 ft: 17760 corp: 12/67b lim: 6 exec/s: 62 rss: 77Mb L: 6/6 MS: 1 CopyPart- 00:07:32.201 #63 NEW cov: 11244 ft: 17770 corp: 13/73b lim: 6 exec/s: 63 rss: 77Mb L: 6/6 MS: 1 ChangeASCIIInt- 
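In the libFuzzer status lines above, `cov` counts hit inline 8-bit counters, `ft` counts features, `corp: 13/73b` means 13 corpus units totalling 73 bytes, `lim` is the current input-length cap (6 bytes for this target), and `MS:` names the mutations that produced each new input. A hedged one-liner for pulling the coverage progression out of a console log like this one (the log filename is hypothetical):

    grep -oE '#[0-9]+ (NEW|DONE) cov: [0-9]+' console.log | awk '{print $1, $4}'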
00:07:32.201 #64 NEW cov: 11251 ft: 17793 corp: 14/79b lim: 6 exec/s: 64 rss: 77Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:32.463 #65 NEW cov: 11251 ft: 17888 corp: 15/85b lim: 6 exec/s: 32 rss: 77Mb L: 6/6 MS: 1 ChangeASCIIInt- 00:07:32.463 #65 DONE cov: 11251 ft: 17888 corp: 15/85b lim: 6 exec/s: 32 rss: 77Mb 00:07:32.463 ###### Recommended dictionary. ###### 00:07:32.463 "\001\003" # Uses: 0 00:07:32.463 ###### End of recommended dictionary. ###### 00:07:32.463 Done 65 runs in 2 second(s) 00:07:32.463 [2024-11-26 18:57:49.562688] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:32.722 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:32.722 18:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:32.722 [2024-11-26 18:57:49.840873] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
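Between runs, run.sh tears down `/tmp/vfio-user-0` and rebuilds the next fuzzer's workspace: the `sed` at vfio/run.sh@39 rewrites the template's vfio-user socket paths for index 1, and the two `echo leak:` entries seed the LSAN suppressions file. A sketch of that setup, with the redirections (which xtrace does not show) assumed, and `$spdk_dir` / `$corpus_dir` as shorthand for the long workspace paths in the log:

    i=1
    mkdir -p /tmp/vfio-user-$i/domain/{1,2} "$corpus_dir"
    sed -e "s%/tmp/vfio-user/domain/1%/tmp/vfio-user-$i/domain/1%;
            s%/tmp/vfio-user/domain/2%/tmp/vfio-user-$i/domain/2%" \
        "$spdk_dir/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" \
        > /tmp/vfio-user-$i/fuzz_vfio_json.conf               # assumed target
    echo leak:spdk_nvmf_qpair_disconnect  > /var/tmp/suppress_vfio_fuzz   # assumed redirect
    echo leak:nvmf_ctrlr_create          >> /var/tmp/suppress_vfio_fuzz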
00:07:32.722 [2024-11-26 18:57:49.840944] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2746135 ] 00:07:32.722 [2024-11-26 18:57:49.920900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.986 [2024-11-26 18:57:49.965873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.986 INFO: Running with entropic power schedule (0xFF, 100). 00:07:32.986 INFO: Seed: 2196659605 00:07:32.986 INFO: Loaded 1 modules (386790 inline 8-bit counters): 386790 [0x2c2ca4c, 0x2c8b132), 00:07:32.986 INFO: Loaded 1 PC tables (386790 PCs): 386790 [0x2c8b138,0x3271f98), 00:07:32.986 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:32.986 INFO: A corpus is not provided, starting from an empty corpus 00:07:32.986 #2 INITED exec/s: 0 rss: 68Mb 00:07:32.986 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:32.986 This may also happen if the target rejected all inputs we tried so far 00:07:33.245 [2024-11-26 18:57:50.216136] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:33.245 [2024-11-26 18:57:50.235530] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:33.245 [2024-11-26 18:57:50.235559] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:33.245 [2024-11-26 18:57:50.235578] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:33.504 NEW_FUNC[1/678]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:33.504 NEW_FUNC[2/678]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:33.504 #97 NEW cov: 11207 ft: 11163 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 5 ChangeBinInt-InsertByte-CrossOver-ChangeBit-CopyPart- 00:07:33.504 [2024-11-26 18:57:50.684382] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:33.504 [2024-11-26 18:57:50.684416] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:33.504 [2024-11-26 18:57:50.684435] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:33.763 #98 NEW cov: 11223 ft: 14513 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 CopyPart- 00:07:33.763 [2024-11-26 18:57:50.849459] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:33.763 [2024-11-26 18:57:50.849501] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:33.763 [2024-11-26 18:57:50.849522] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:33.763 #99 NEW cov: 11223 ft: 16116 corp: 4/13b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:34.024 [2024-11-26 18:57:51.005145] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:34.024 [2024-11-26 18:57:51.005168] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:34.024 [2024-11-26 18:57:51.005188] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 
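Fuzzer 1 targets fuzz_vfio_user_version (llvm_vfio_fuzz.c:71), so the repeating `bad command 1` / `cmd 1 failed: Invalid argument` / `Command 1 return failure` triple is the expected negative path: the vfio-user server rejecting each mutated version-negotiation message, not a harness fault. A quick hedged check that a run exercised that path (log name hypothetical):

    grep -c 'cmd 1 failed: Invalid argument' vfio-user-1.log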
00:07:34.024 NEW_FUNC[1/1]: 0x1c134a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662
00:07:34.024 #105 NEW cov: 11240 ft: 16852 corp: 5/17b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 CopyPart-
00:07:34.024 [2024-11-26 18:57:51.164298] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:34.024 [2024-11-26 18:57:51.164322] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:34.024 [2024-11-26 18:57:51.164340] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:34.284 #106 NEW cov: 11240 ft: 17317 corp: 6/21b lim: 4 exec/s: 106 rss: 76Mb L: 4/4 MS: 1 ChangeBit-
00:07:34.284 [2024-11-26 18:57:51.322373] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:34.284 [2024-11-26 18:57:51.322397] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:34.284 [2024-11-26 18:57:51.322415] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:34.284 #110 NEW cov: 11240 ft: 17499 corp: 7/25b lim: 4 exec/s: 110 rss: 76Mb L: 4/4 MS: 4 ChangeBit-InsertByte-ShuffleBytes-CrossOver-
00:07:34.284 [2024-11-26 18:57:51.486950] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:34.284 [2024-11-26 18:57:51.486972] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:34.284 [2024-11-26 18:57:51.486991] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:34.543 #111 NEW cov: 11240 ft: 17648 corp: 8/29b lim: 4 exec/s: 111 rss: 76Mb L: 4/4 MS: 1 ChangeBit-
00:07:34.543 [2024-11-26 18:57:51.641874] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:34.543 [2024-11-26 18:57:51.641896] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:34.543 [2024-11-26 18:57:51.641914] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:34.543 #112 NEW cov: 11240 ft: 17996 corp: 9/33b lim: 4 exec/s: 112 rss: 76Mb L: 4/4 MS: 1 CopyPart-
00:07:34.802 [2024-11-26 18:57:51.796776] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:34.802 [2024-11-26 18:57:51.796800] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:34.802 [2024-11-26 18:57:51.796819] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:34.802 #114 NEW cov: 11240 ft: 18111 corp: 10/37b lim: 4 exec/s: 114 rss: 76Mb L: 4/4 MS: 2 EraseBytes-CopyPart-
00:07:34.802 [2024-11-26 18:57:51.950669] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:34.802 [2024-11-26 18:57:51.950690] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:34.802 [2024-11-26 18:57:51.950709] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:35.061 #115 NEW cov: 11247 ft: 18174 corp: 11/41b lim: 4 exec/s: 115 rss: 76Mb L: 4/4 MS: 1 ChangeBinInt-
00:07:35.061 [2024-11-26 18:57:52.105574] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:35.061 [2024-11-26 18:57:52.105597] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:35.061 [2024-11-26 18:57:52.105615] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:35.061 #116 NEW cov: 11247 ft: 18240 corp: 12/45b lim: 4 exec/s: 58 rss: 76Mb L: 4/4 MS: 1 ChangeBit-
00:07:35.061 #116 DONE cov: 11247 ft: 18240 corp: 12/45b lim: 4 exec/s: 58 rss: 76Mb
00:07:35.061 Done 116 runs in 2 second(s)
00:07:35.061 [2024-11-26 18:57:52.221686] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%;
00:07:35.321 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:35.321 18:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2
00:07:35.321 [2024-11-26 18:57:52.502117] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:07:35.321 [2024-11-26 18:57:52.502209] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2746492 ]
00:07:35.580 [2024-11-26 18:57:52.584682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:35.580 [2024-11-26 18:57:52.629419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:35.839 INFO: Running with entropic power schedule (0xFF, 100).
00:07:35.839 INFO: Seed: 561695480
00:07:35.839 INFO: Loaded 1 modules (386790 inline 8-bit counters): 386790 [0x2c2ca4c, 0x2c8b132),
00:07:35.839 INFO: Loaded 1 PC tables (386790 PCs): 386790 [0x2c8b138,0x3271f98),
00:07:35.839 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2
00:07:35.839 INFO: A corpus is not provided, starting from an empty corpus
00:07:35.839 #2 INITED exec/s: 0 rss: 68Mb
00:07:35.839 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:35.839 This may also happen if the target rejected all inputs we tried so far
00:07:35.839 [2024-11-26 18:57:52.867905] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller
00:07:35.839 [2024-11-26 18:57:52.910918] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:36.408 NEW_FUNC[1/677]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103
00:07:36.408 NEW_FUNC[2/677]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:36.408 #96 NEW cov: 11189 ft: 11147 corp: 2/9b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 4 ChangeBit-InsertRepeatedBytes-CopyPart-InsertByte-
00:07:36.408 [2024-11-26 18:57:53.378966] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:36.408 #102 NEW cov: 11206 ft: 14351 corp: 3/17b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 CopyPart-
00:07:36.408 [2024-11-26 18:57:53.548565] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:36.668 NEW_FUNC[1/1]: 0x1c134a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662
00:07:36.668 #103 NEW cov: 11223 ft: 15365 corp: 4/25b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 1 ChangeByte-
00:07:36.668 [2024-11-26 18:57:53.718769] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:36.668 #114 NEW cov: 11223 ft: 15849 corp: 5/33b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 1 ShuffleBytes-
00:07:36.927 [2024-11-26 18:57:53.880541] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:36.927 #115 NEW cov: 11223 ft: 16226 corp: 6/41b lim: 8 exec/s: 115 rss: 76Mb L: 8/8 MS: 1 CrossOver-
00:07:36.927 [2024-11-26 18:57:54.039695] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:36.927 #116 NEW cov: 11223 ft: 16421 corp: 7/49b lim: 8 exec/s: 116 rss: 76Mb L: 8/8 MS: 1 ChangeByte-
00:07:37.186 [2024-11-26 18:57:54.200307] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: cmd 5 failed: Invalid argument
00:07:37.186 [2024-11-26 18:57:54.200346] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure
00:07:37.186 NEW_FUNC[1/1]: 0x1590558 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3098
00:07:37.186 #117 NEW cov: 11233 ft: 16469 corp: 8/57b lim: 8 exec/s: 117 rss: 76Mb L: 8/8 MS: 1 ShuffleBytes-
00:07:37.186 [2024-11-26 18:57:54.371751] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:37.445 #118 NEW cov: 11233 ft: 16846 corp: 9/65b lim: 8 exec/s: 118 rss: 76Mb L: 8/8 MS: 1 ShuffleBytes-
00:07:37.445 [2024-11-26 18:57:54.532955] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:37.445 #119 NEW cov: 11233 ft: 16946 corp: 10/73b lim: 8 exec/s: 119 rss: 76Mb L: 8/8 MS: 1 ShuffleBytes-
00:07:37.704 [2024-11-26 18:57:54.694514] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:37.704 #125 NEW cov: 11240 ft: 17013 corp: 11/81b lim: 8 exec/s: 125 rss: 76Mb L: 8/8 MS: 1 ChangeBit-
00:07:37.704 [2024-11-26 18:57:54.859101] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:37.963 #126 NEW cov: 11240 ft: 17039 corp: 12/89b lim: 8 exec/s: 63 rss: 76Mb L: 8/8 MS: 1 ChangeBinInt-
00:07:37.963 #126 DONE cov: 11240 ft: 17039 corp: 12/89b lim: 8 exec/s: 63 rss: 76Mb
00:07:37.963 Done 126 runs in 2 second(s)
00:07:37.963 [2024-11-26 18:57:54.982689] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%;
00:07:38.223 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:38.223 18:57:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3
00:07:38.223 [2024-11-26 18:57:55.250912] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:07:38.223 [2024-11-26 18:57:55.250985] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2746854 ]
00:07:38.223 [2024-11-26 18:57:55.333051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:38.223 [2024-11-26 18:57:55.377614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:38.482 INFO: Running with entropic power schedule (0xFF, 100).
00:07:38.482 INFO: Seed: 3313711453
00:07:38.482 INFO: Loaded 1 modules (386790 inline 8-bit counters): 386790 [0x2c2ca4c, 0x2c8b132),
00:07:38.482 INFO: Loaded 1 PC tables (386790 PCs): 386790 [0x2c8b138,0x3271f98),
00:07:38.482 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3
00:07:38.482 INFO: A corpus is not provided, starting from an empty corpus
00:07:38.482 #2 INITED exec/s: 0 rss: 67Mb
00:07:38.482 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:38.482 This may also happen if the target rejected all inputs we tried so far
00:07:38.482 [2024-11-26 18:57:55.621298] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller
00:07:38.482 [2024-11-26 18:57:55.640530] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=304 offset=0 prot=0x3: Invalid argument
00:07:38.482 [2024-11-26 18:57:55.640558] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument
00:07:38.482 [2024-11-26 18:57:55.640569] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:38.482 [2024-11-26 18:57:55.640587] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:39.001 NEW_FUNC[1/678]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124
00:07:39.002 NEW_FUNC[2/678]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:39.002 #27 NEW cov: 11204 ft: 10748 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 5 InsertByte-ChangeByte-EraseBytes-ChangeBit-InsertRepeatedBytes-
00:07:39.002 [2024-11-26 18:57:56.063097] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0x40000000000, 0x40000000000) fd=306 offset=0 prot=0x3: Invalid argument
00:07:39.002 [2024-11-26 18:57:56.063133] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x40000000000, 0x40000000000) offset=0 flags=0x3: Invalid argument
00:07:39.002 [2024-11-26 18:57:56.063145] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:39.002 [2024-11-26 18:57:56.063164] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:39.002 #28 NEW cov: 11221 ft: 13721 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBit-
00:07:39.002 [2024-11-26 18:57:56.187408] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0x40000000000, 0x40000000000) fd=306 offset=0x400000000000 prot=0x3: Invalid argument
00:07:39.002 [2024-11-26 18:57:56.187438] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x40000000000, 0x40000000000) offset=0x400000000000 flags=0x3: Invalid argument
00:07:39.002 [2024-11-26 18:57:56.187450] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:39.002 [2024-11-26 18:57:56.187468] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:39.261 #29 NEW cov: 11221 ft: 15401 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeBit-
00:07:39.261 [2024-11-26 18:57:56.311598] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 4702111234487484415 > max 8796093022208
00:07:39.261 [2024-11-26 18:57:56.311627] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xffffffffffff0000, 0x4141414141feffff) offset=0xffff41414141 flags=0x3: No space left on device
00:07:39.261 [2024-11-26 18:57:56.311639] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device
00:07:39.261 [2024-11-26 18:57:56.311657] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:39.261 NEW_FUNC[1/1]: 0x1c134a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662
00:07:39.261 #32 NEW cov: 11238 ft: 15975 corp: 5/129b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 3 InsertRepeatedBytes-InsertRepeatedBytes-InsertRepeatedBytes-
00:07:39.261 [2024-11-26 18:57:56.446850] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 4702111234487484415 > max 8796093022208
00:07:39.261 [2024-11-26 18:57:56.446880] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xffffffffffff0400, 0x4141414141ff03ff) offset=0xffff41414141 flags=0x3: No space left on device
00:07:39.261 [2024-11-26 18:57:56.446892] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device
00:07:39.261 [2024-11-26 18:57:56.446915] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:39.520 #33 NEW cov: 11238 ft: 16990 corp: 6/161b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeBit-
00:07:39.520 [2024-11-26 18:57:56.572175] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 13240373178028523519 > max 8796093022208
00:07:39.520 [2024-11-26 18:57:56.572202] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xffffffffffff0000, 0xb7bf414141feffff) offset=0xffff41414141 flags=0x3: No space left on device
00:07:39.520 [2024-11-26 18:57:56.572213] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device
00:07:39.520 [2024-11-26 18:57:56.572231] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:39.520 #34 NEW cov: 11238 ft: 17149 corp: 7/193b lim: 32 exec/s: 34 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt-
00:07:39.520 [2024-11-26 18:57:56.697238] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 136619722014719 > max 8796093022208
00:07:39.520 [2024-11-26 18:57:56.697265] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xffffffffffff0000, 0x7c4141feffff) offset=0xffff41000000 flags=0x3: No space left on device
00:07:39.520 [2024-11-26 18:57:56.697276] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device
00:07:39.520 [2024-11-26 18:57:56.697294] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:39.780 #35 NEW cov: 11238 ft: 17473 corp: 8/225b lim: 32 exec/s: 35 rss: 77Mb L: 32/32 MS: 1 CrossOver-
00:07:39.781 [2024-11-26 18:57:56.811277] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0x40000000000, 0x40000000000) fd=306 offset=0 prot=0x3: Invalid argument
00:07:39.781 [2024-11-26 18:57:56.811303] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x40000000000, 0x40000000000) offset=0 flags=0x3: Invalid argument
00:07:39.781 [2024-11-26 18:57:56.811315] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:39.781 [2024-11-26 18:57:56.811333] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:39.781 #36 NEW cov: 11238 ft: 17638 corp: 9/257b lim: 32 exec/s: 36 rss: 77Mb L: 32/32 MS: 1 CopyPart-
00:07:39.781 [2024-11-26 18:57:56.934378] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 4702111234487484415 > max 8796093022208
00:07:39.781 [2024-11-26 18:57:56.934405] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xffffffffffff3000, 0x4141414141ff2fff) offset=0xffff41414141 flags=0x3: No space left on device
00:07:39.781 [2024-11-26 18:57:56.934417] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device
00:07:39.781 [2024-11-26 18:57:56.934435] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:40.040 #37 NEW cov: 11238 ft: 17950 corp: 10/289b lim: 32 exec/s: 37 rss: 77Mb L: 32/32 MS: 1 ChangeByte-
00:07:40.040 [2024-11-26 18:57:57.057548] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=306 offset=0 prot=0x3: Invalid argument
00:07:40.040 [2024-11-26 18:57:57.057584] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument
00:07:40.040 [2024-11-26 18:57:57.057594] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:40.040 [2024-11-26 18:57:57.057612] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:40.040 #38 NEW cov: 11238 ft: 18046 corp: 11/321b lim: 32 exec/s: 38 rss: 77Mb L: 32/32 MS: 1 ChangeBit-
00:07:40.040 [2024-11-26 18:57:57.181578] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xa100, 0xa100) fd=306 offset=0 prot=0x3: Invalid argument
00:07:40.040 [2024-11-26 18:57:57.181603] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xa100, 0xa100) offset=0 flags=0x3: Invalid argument
00:07:40.040 [2024-11-26 18:57:57.181617] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:40.040 [2024-11-26 18:57:57.181636] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:40.300 #39 NEW cov: 11238 ft: 18263 corp: 12/353b lim: 32 exec/s: 39 rss: 77Mb L: 32/32 MS: 1 ChangeByte-
00:07:40.300 [2024-11-26 18:57:57.304772] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 136619722014719 > max 8796093022208
00:07:40.300 [2024-11-26 18:57:57.304797] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xffffffffffff0000, 0x7c4141feffff) offset=0xffff3a000000 flags=0x3: No space left on device
00:07:40.300 [2024-11-26 18:57:57.304808] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device
00:07:40.300 [2024-11-26 18:57:57.304826] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:40.300 #40 NEW cov: 11238 ft: 18468 corp: 13/385b lim: 32 exec/s: 40 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt-
00:07:40.300 [2024-11-26 18:57:57.429132] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 9223508656576790527 > max 8796093022208
00:07:40.300 [2024-11-26 18:57:57.429157] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xffffffffffff0000, 0x80007c4141feffff) offset=0xffff3a000000 flags=0x3: No space left on device
00:07:40.300 [2024-11-26 18:57:57.429168] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device
00:07:40.300 [2024-11-26 18:57:57.429186] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:40.300 #41 NEW cov: 11245 ft: 18657 corp: 14/417b lim: 32 exec/s: 41 rss: 77Mb L: 32/32 MS: 1 ChangeBit-
00:07:40.559 [2024-11-26 18:57:57.554481] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0x40000000000, 0x40000000000) fd=306 offset=0x4000000 prot=0x3: Invalid argument
00:07:40.559 [2024-11-26 18:57:57.554511] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x40000000000, 0x40000000000) offset=0x4000000 flags=0x3: Invalid argument
00:07:40.559 [2024-11-26 18:57:57.554522] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:40.559 [2024-11-26 18:57:57.554542] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:40.559 #42 NEW cov: 11245 ft: 18868 corp: 15/449b lim: 32 exec/s: 21 rss: 77Mb L: 32/32 MS: 1 ChangeBit-
00:07:40.559 #42 DONE cov: 11245 ft: 18868 corp: 15/449b lim: 32 exec/s: 21 rss: 77Mb
00:07:40.559 Done 42 runs in 2 second(s)
00:07:40.559 [2024-11-26 18:57:57.649699] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%;
00:07:40.818 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:40.818 18:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4
00:07:40.819 [2024-11-26 18:57:57.926155] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:07:40.819 [2024-11-26 18:57:57.926231] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2747212 ]
00:07:41.078 [2024-11-26 18:57:58.008036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:41.078 [2024-11-26 18:57:58.052750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:41.078 INFO: Running with entropic power schedule (0xFF, 100).
00:07:41.078 INFO: Seed: 1694726127
00:07:41.078 INFO: Loaded 1 modules (386790 inline 8-bit counters): 386790 [0x2c2ca4c, 0x2c8b132),
00:07:41.078 INFO: Loaded 1 PC tables (386790 PCs): 386790 [0x2c8b138,0x3271f98),
00:07:41.078 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:07:41.078 INFO: A corpus is not provided, starting from an empty corpus
00:07:41.078 #2 INITED exec/s: 0 rss: 67Mb
00:07:41.078 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:41.078 This may also happen if the target rejected all inputs we tried so far
00:07:41.336 [2024-11-26 18:57:58.297043] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller
00:07:41.594 NEW_FUNC[1/677]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144
00:07:41.594 NEW_FUNC[2/677]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:41.594 #94 NEW cov: 11199 ft: 11135 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 2 CopyPart-InsertRepeatedBytes-
00:07:41.853 #130 NEW cov: 11213 ft: 13525 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CrossOver-
00:07:42.112 NEW_FUNC[1/1]: 0x1c134a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662
00:07:42.112 #141 NEW cov: 11233 ft: 14614 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 CMP- DE: "\000\000"-
00:07:42.112 #147 NEW cov: 11233 ft: 14975 corp: 5/129b lim: 32 exec/s: 147 rss: 76Mb L: 32/32 MS: 1 ChangeBit-
00:07:42.371 #148 NEW cov: 11233 ft: 15514 corp: 6/161b lim: 32 exec/s: 148 rss: 76Mb L: 32/32 MS: 1 ChangeBit-
00:07:42.629 #149 NEW cov: 11233 ft: 16041 corp: 7/193b lim: 32 exec/s: 149 rss: 76Mb L: 32/32 MS: 1 CMP- DE: "\001\000\000\000"-
00:07:42.629 #150 NEW cov: 11233 ft: 16297 corp: 8/225b lim: 32 exec/s: 150 rss: 76Mb L: 32/32 MS: 1 ChangeBit-
00:07:42.888 #151 NEW cov: 11233 ft: 16310 corp: 9/257b lim: 32 exec/s: 151 rss: 76Mb L: 32/32 MS: 1 PersAutoDict- DE: "\000\000"-
00:07:43.147 #152 NEW cov: 11240 ft: 16339 corp: 10/289b lim: 32 exec/s: 152 rss: 77Mb L: 32/32 MS: 1 ChangeByte-
00:07:43.147 #153 NEW cov: 11240 ft: 16396 corp: 11/321b lim: 32 exec/s: 76 rss: 77Mb L: 32/32 MS: 1 PersAutoDict- DE: "\001\000\000\000"-
00:07:43.147 #153 DONE cov: 11240 ft: 16396 corp: 11/321b lim: 32 exec/s: 76 rss: 77Mb
00:07:43.147 ###### Recommended dictionary. ######
00:07:43.147 "\000\000" # Uses: 1
00:07:43.147 "\001\000\000\000" # Uses: 1
00:07:43.147 ###### End of recommended dictionary. ######
00:07:43.147 Done 153 runs in 2 second(s)
00:07:43.405 [2024-11-26 18:58:00.368685] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller
00:07:43.405 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz
00:07:43.405 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:43.405 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:43.405 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1
00:07:43.405 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5
00:07:43.405 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:43.405 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:43.405 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:07:43.405 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5
00:07:43.405 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1
00:07:43.405 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2
00:07:43.405 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf
00:07:43.405 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:43.405 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:43.405 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:07:43.664 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%;
00:07:43.664 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:43.664 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:43.664 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:43.664 18:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5
00:07:43.664 [2024-11-26 18:58:00.657318] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:07:43.664 [2024-11-26 18:58:00.657392] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2747573 ]
00:07:43.664 [2024-11-26 18:58:00.742667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:43.664 [2024-11-26 18:58:00.788171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:43.971 INFO: Running with entropic power schedule (0xFF, 100).
00:07:43.971 INFO: Seed: 142771844
00:07:43.971 INFO: Loaded 1 modules (386790 inline 8-bit counters): 386790 [0x2c2ca4c, 0x2c8b132),
00:07:43.971 INFO: Loaded 1 PC tables (386790 PCs): 386790 [0x2c8b138,0x3271f98),
00:07:43.971 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:07:43.971 INFO: A corpus is not provided, starting from an empty corpus
00:07:43.971 #2 INITED exec/s: 0 rss: 67Mb
00:07:43.971 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:43.971 This may also happen if the target rejected all inputs we tried so far
00:07:43.971 [2024-11-26 18:58:01.039841] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller
00:07:43.971 [2024-11-26 18:58:01.093550] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:43.971 [2024-11-26 18:58:01.093597] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:44.637 NEW_FUNC[1/678]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171
00:07:44.637 NEW_FUNC[2/678]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:44.637 #136 NEW cov: 11211 ft: 11083 corp: 2/14b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 4 ShuffleBytes-ChangeBinInt-InsertRepeatedBytes-CrossOver-
00:07:44.637 [2024-11-26 18:58:01.559029] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:44.637 [2024-11-26 18:58:01.559079] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:44.637 #142 NEW cov: 11225 ft: 13435 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt-
00:07:44.637 [2024-11-26 18:58:01.728162] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:44.637 [2024-11-26 18:58:01.728197] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:44.902 NEW_FUNC[1/1]: 0x1c134a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662
00:07:44.902 #144 NEW cov: 11242 ft: 13820 corp: 4/40b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 2 EraseBytes-CopyPart-
00:07:44.902 [2024-11-26 18:58:01.896593] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:44.902 [2024-11-26 18:58:01.896626] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:44.902 #145 NEW cov: 11242 ft: 13955 corp: 5/53b lim: 13 exec/s: 0 rss: 77Mb L: 13/13 MS: 1 ShuffleBytes-
00:07:44.902 [2024-11-26 18:58:02.058915] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:44.902 [2024-11-26 18:58:02.058948] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:45.160 #146 NEW cov: 11242 ft: 14279 corp: 6/66b lim: 13 exec/s: 146 rss: 77Mb L: 13/13 MS: 1 CrossOver-
00:07:45.160 [2024-11-26 18:58:02.227246] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:45.160 [2024-11-26 18:58:02.227277] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:45.160 #147 NEW cov: 11242 ft: 15428 corp: 7/79b lim: 13 exec/s: 147 rss: 77Mb L: 13/13 MS: 1 ChangeBit-
00:07:45.419 [2024-11-26 18:58:02.389220] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:45.419 [2024-11-26 18:58:02.389252] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:45.419 #148 NEW cov: 11242 ft: 15964 corp: 8/92b lim: 13 exec/s: 148 rss: 77Mb L: 13/13 MS: 1 CopyPart-
00:07:45.420 [2024-11-26 18:58:02.548042] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:45.420 [2024-11-26 18:58:02.548073] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:45.678 #149 NEW cov: 11242 ft: 16678 corp: 9/105b lim: 13 exec/s: 149 rss: 77Mb L: 13/13 MS: 1 ChangeBit-
00:07:45.678 [2024-11-26 18:58:02.708016] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:45.678 [2024-11-26 18:58:02.708100] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:45.678 #155 NEW cov: 11249 ft: 16853 corp: 10/118b lim: 13 exec/s: 155 rss: 77Mb L: 13/13 MS: 1 CopyPart-
00:07:45.678 [2024-11-26 18:58:02.868786] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:45.678 [2024-11-26 18:58:02.868818] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:45.937 #156 NEW cov: 11249 ft: 16904 corp: 11/131b lim: 13 exec/s: 156 rss: 77Mb L: 13/13 MS: 1 ShuffleBytes-
00:07:45.937 [2024-11-26 18:58:03.033864] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:45.937 [2024-11-26 18:58:03.033896] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:45.937 #157 NEW cov: 11249 ft: 16931 corp: 12/144b lim: 13 exec/s: 78 rss: 77Mb L: 13/13 MS: 1 ChangeBit-
00:07:45.937 #157 DONE cov: 11249 ft: 16931 corp: 12/144b lim: 13 exec/s: 78 rss: 77Mb
00:07:45.937 Done 157 runs in 2 second(s)
00:07:45.937 [2024-11-26 18:58:03.147671] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller
00:07:46.195 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz
00:07:46.195 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:46.195 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:46.195 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:07:46.195 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6
00:07:46.195 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:46.195 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:46.195 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:46.195 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6
00:07:46.196 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1
00:07:46.196 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2
00:07:46.196 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf
00:07:46.196 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:46.196 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:46.196 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:46.196 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%;
00:07:46.196 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:46.196 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:46.196 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:46.196 18:58:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6
00:07:46.454 [2024-11-26 18:58:03.424558] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:07:46.454 [2024-11-26 18:58:03.424632] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2747944 ]
00:07:46.454 [2024-11-26 18:58:03.505575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.454 [2024-11-26 18:58:03.550347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:46.713 INFO: Running with entropic power schedule (0xFF, 100).
00:07:46.713 INFO: Seed: 2892771724
00:07:46.713 INFO: Loaded 1 modules (386790 inline 8-bit counters): 386790 [0x2c2ca4c, 0x2c8b132),
00:07:46.713 INFO: Loaded 1 PC tables (386790 PCs): 386790 [0x2c8b138,0x3271f98),
00:07:46.713 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:46.713 INFO: A corpus is not provided, starting from an empty corpus
00:07:46.713 #2 INITED exec/s: 0 rss: 68Mb
00:07:46.713 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:46.713 This may also happen if the target rejected all inputs we tried so far
00:07:46.713 [2024-11-26 18:58:03.790998] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller
00:07:46.713 [2024-11-26 18:58:03.834526] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:46.713 [2024-11-26 18:58:03.834563] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:47.230 NEW_FUNC[1/678]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190
00:07:47.230 NEW_FUNC[2/678]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:47.230 #6 NEW cov: 11200 ft: 11160 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 4 InsertRepeatedBytes-ChangeByte-ChangeBinInt-InsertByte-
00:07:47.230 [2024-11-26 18:58:04.299246] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:47.230 [2024-11-26 18:58:04.299290] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:47.230 #12 NEW cov: 11217 ft: 13862 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 ChangeBit-
00:07:47.488 [2024-11-26 18:58:04.476233] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:47.488 [2024-11-26 18:58:04.476267] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:47.488 NEW_FUNC[1/1]: 0x1c134a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662
00:07:47.488 #14 NEW cov: 11234 ft: 14490 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 2 ChangeByte-CrossOver-
00:07:47.488 [2024-11-26 18:58:04.669646] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:47.488 [2024-11-26 18:58:04.669679] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:47.747 #15 NEW cov: 11234 ft: 14999 corp: 5/37b lim: 9 exec/s: 15 rss: 76Mb L: 9/9 MS: 1 CrossOver-
00:07:47.747 [2024-11-26 18:58:04.846180] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:47.747 [2024-11-26 18:58:04.846213] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:47.747 #16 NEW cov: 11234 ft: 15585 corp: 6/46b lim: 9 exec/s: 16 rss: 76Mb L: 9/9 MS: 1 ChangeBit-
00:07:48.006 [2024-11-26 18:58:05.024399] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:48.006 [2024-11-26 18:58:05.024430] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:48.006 #17 NEW cov: 11234 ft: 15934 corp: 7/55b lim: 9 exec/s: 17 rss: 76Mb L: 9/9 MS: 1 CopyPart-
00:07:48.006 [2024-11-26 18:58:05.207017] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:48.006 [2024-11-26 18:58:05.207050] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:48.265 #18 NEW cov: 11234 ft: 16523 corp: 8/64b lim: 9 exec/s: 18 rss: 76Mb L: 9/9 MS: 1 CrossOver-
00:07:48.265 [2024-11-26 18:58:05.387212] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:48.265 [2024-11-26 18:58:05.387242] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:48.524 #19 NEW cov: 11234 ft: 16570 corp: 9/73b lim: 9 exec/s: 19 rss: 76Mb L: 9/9 MS: 1 CrossOver-
00:07:48.524 [2024-11-26 18:58:05.562333] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:48.524 [2024-11-26 18:58:05.562365] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:48.524 #20 NEW cov: 11241 ft: 16643 corp: 10/82b lim: 9 exec/s: 20 rss: 77Mb L: 9/9 MS: 1 ShuffleBytes-
00:07:48.782 [2024-11-26 18:58:05.741193] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:48.782 [2024-11-26 18:58:05.741226] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:48.782 #21 NEW cov: 11241 ft: 16767 corp: 11/91b lim: 9 exec/s: 10 rss: 77Mb L: 9/9 MS: 1 ChangeBit-
00:07:48.782 #21 DONE cov: 11241 ft: 16767 corp: 11/91b lim: 9 exec/s: 10 rss: 77Mb
00:07:48.782 Done 21 runs in 2 second(s)
00:07:48.782 [2024-11-26 18:58:05.870680] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller
00:07:49.041 18:58:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz
00:07:49.041 18:58:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:49.041 18:58:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:49.041 18:58:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:07:49.041
00:07:49.041 real 0m19.427s
00:07:49.041 user 0m26.795s
00:07:49.041 sys 0m1.901s
00:07:49.041 18:58:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.041 18:58:06 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:49.041 ************************************
00:07:49.041 END TEST vfio_llvm_fuzz
00:07:49.041 ************************************
00:07:49.041
00:07:49.041 real 1m23.181s
00:07:49.041 user 2m6.809s
00:07:49.041 sys 0m9.221s
00:07:49.041 18:58:06 llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.041 18:58:06 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:49.041 ************************************
00:07:49.041 END TEST llvm_fuzz
00:07:49.041 ************************************
00:07:49.041 18:58:06 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:07:49.041 18:58:06 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:07:49.041 18:58:06 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:07:49.041 18:58:06 -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:49.041 18:58:06 -- common/autotest_common.sh@10 -- # set +x
00:07:49.041 18:58:06 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:07:49.041 18:58:06 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:07:49.041 18:58:06 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:07:49.041 18:58:06 -- common/autotest_common.sh@10 -- # set +x
00:07:53.235 INFO: APP EXITING
00:07:53.235 INFO: killing all VMs
00:07:53.235 INFO: killing vhost app
00:07:53.235 INFO: EXIT DONE
00:07:55.769 Waiting for block devices as requested
00:07:55.769 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:07:55.769 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:07:56.028 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:07:56.028 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:07:56.028 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:07:56.287 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:07:56.287 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:07:56.287 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:07:56.287 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:07:56.546 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:07:56.546 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:07:56.546 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:07:56.805 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:07:56.805 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:07:56.805 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:07:57.064 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:07:57.064 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:00.352 Cleaning
00:08:00.352 Removing: /dev/shm/spdk_tgt_trace.pid2725850
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2723511
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2724636
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2725850
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2726303
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2727113
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2727136
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2727973
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2728057
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2728403
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2728641
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2728875
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2729132
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2729367
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2729563
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2729757
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2729992
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2730575
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2733016
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2733292
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2733499
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2733507
00:08:00.352 Removing: /var/run/dpdk/spdk_pid2733941
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2734072
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2734463
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2734471
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2734680
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2734844
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2734972
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2735057
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2735515
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2735682
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2735836
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2735986
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2736560
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2736917
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2737271
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2737628
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2737892
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2738192
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2738545
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2738979
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2739380
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2740117
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2740486
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2740842
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2741189
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2741450
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2741752
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2742114
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2742467
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2742828
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2743183
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2743542
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2743896
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2744255
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2744596
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2744876
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2745176
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2745775
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2746135
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2746492
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2746854
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2747212
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2747573
00:08:00.353 Removing: /var/run/dpdk/spdk_pid2747944
00:08:00.353 Clean
00:08:00.353 18:58:17 -- common/autotest_common.sh@1453 -- # return 0
00:08:00.353 18:58:17 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:08:00.353 18:58:17 -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:00.353 18:58:17 -- common/autotest_common.sh@10 -- # set +x
00:08:00.353 18:58:17 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:08:00.353 18:58:17 -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:00.353 18:58:17 -- common/autotest_common.sh@10 -- # set +x
00:08:00.353 18:58:17 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:00.353 18:58:17 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:08:00.353 18:58:17 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:08:00.353 18:58:17 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:08:00.353 18:58:17 -- spdk/autotest.sh@398 -- # hostname
00:08:00.353 18:58:17 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-49 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info
00:08:00.611 geninfo: WARNING: invalid characters removed from testname!
00:08:05.885 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda
00:08:10.082 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda
00:08:13.378 18:58:29 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:21.502 18:58:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:26.776 18:58:42 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:32.074 18:58:48 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:37.544 18:58:53 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:42.817 18:58:59 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:48.093 18:59:04 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:08:48.093 18:59:04 -- spdk/autorun.sh@1 -- $ timing_finish
00:08:48.093 18:59:04 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]]
00:08:48.093 18:59:04 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:08:48.093 18:59:04 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:08:48.093 18:59:04 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:48.093 + [[ -n 2627073 ]]
00:08:48.093 + sudo kill 2627073
00:08:48.103 [Pipeline] }
00:08:48.121 [Pipeline] // stage
00:08:48.127 [Pipeline] }
00:08:48.143 [Pipeline] // timeout
00:08:48.148 [Pipeline] }
00:08:48.164 [Pipeline] // catchError
00:08:48.171 [Pipeline] }
00:08:48.189 [Pipeline] // wrap
00:08:48.197 [Pipeline] }
00:08:48.211 [Pipeline] // catchError
00:08:48.219 [Pipeline] stage
00:08:48.221 [Pipeline] { (Epilogue)
00:08:48.232 [Pipeline] catchError
00:08:48.233 [Pipeline] {
00:08:48.245 [Pipeline] echo
00:08:48.246 Cleanup processes
00:08:48.250 [Pipeline] sh
00:08:48.528 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:48.528 2753880 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:48.542 [Pipeline] sh
00:08:48.825 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:48.825 ++ grep -v 'sudo pgrep'
00:08:48.825 ++ awk '{print $1}'
00:08:48.825 + sudo kill -9
00:08:48.825 + true
00:08:48.839 [Pipeline] sh
00:08:49.121 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:09:01.351 [Pipeline] sh
00:09:01.638 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:09:01.638 Artifacts sizes are good
00:09:01.653 [Pipeline] archiveArtifacts
00:09:01.661 Archiving artifacts
00:09:01.810 [Pipeline] sh
00:09:02.095 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest
00:09:02.109 [Pipeline] cleanWs
00:09:02.118 [WS-CLEANUP] Deleting project workspace...
00:09:02.118 [WS-CLEANUP] Deferred wipeout is used...
00:09:02.126 [WS-CLEANUP] done
00:09:02.128 [Pipeline] }
00:09:02.143 [Pipeline] // catchError
00:09:02.155 [Pipeline] sh
00:09:02.453 + logger -p user.info -t JENKINS-CI
00:09:02.463 [Pipeline] }
00:09:02.477 [Pipeline] // stage
00:09:02.482 [Pipeline] }
00:09:02.496 [Pipeline] // node
00:09:02.501 [Pipeline] End of Pipeline
00:09:02.538 Finished: SUCCESS