00:00:00.001 Started by upstream project "autotest-per-patch" build number 127184
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.014 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.015 The recommended git tool is: git
00:00:00.015 using credential 00000000-0000-0000-0000-000000000002
00:00:00.018 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.035 Fetching changes from the remote Git repository
00:00:00.036 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.081 Using shallow fetch with depth 1
00:00:00.081 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.081 > git --version # timeout=10
00:00:00.140 > git --version # 'git version 2.39.2'
00:00:00.140 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.190 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.190 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.379 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.391 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.405 Checking out Revision 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b (FETCH_HEAD)
00:00:02.405 > git config core.sparsecheckout # timeout=10
00:00:02.419 > git read-tree -mu HEAD # timeout=10
00:00:02.435 > git checkout -f 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b # timeout=5
00:00:02.458 Commit message: "jjb/jobs: add SPDK_TEST_SETUP flag into configuration"
00:00:02.458 > git rev-list --no-walk 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b # timeout=10
00:00:02.578 [Pipeline] Start of Pipeline
00:00:02.599 [Pipeline] library
00:00:02.602 Loading library shm_lib@master
00:00:02.602 Library shm_lib@master is cached. Copying from home.
00:00:02.620 [Pipeline] node
00:00:02.646 Running on WFP13 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:02.648 [Pipeline] {
00:00:02.661 [Pipeline] catchError
00:00:02.663 [Pipeline] {
00:00:02.679 [Pipeline] wrap
00:00:02.689 [Pipeline] {
00:00:02.697 [Pipeline] stage
00:00:02.698 [Pipeline] { (Prologue)
00:00:02.867 [Pipeline] sh
00:00:03.696 + logger -p user.info -t JENKINS-CI
00:00:03.714 [Pipeline] echo
00:00:03.715 Node: WFP13
00:00:03.724 [Pipeline] sh
00:00:04.055 [Pipeline] setCustomBuildProperty
00:00:04.065 [Pipeline] echo
00:00:04.067 Cleanup processes
00:00:04.073 [Pipeline] sh
00:00:04.404 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.405 29077 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.419 [Pipeline] sh
00:00:04.708 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.708 ++ grep -v 'sudo pgrep'
00:00:04.708 ++ awk '{print $1}'
00:00:04.708 + sudo kill -9
00:00:04.708 + true
00:00:04.761 [Pipeline] cleanWs
00:00:04.773 [WS-CLEANUP] Deleting project workspace...
00:00:04.773 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.785 [WS-CLEANUP] done
00:00:04.791 [Pipeline] setCustomBuildProperty
00:00:04.809 [Pipeline] sh
00:00:05.097 + sudo git config --global --replace-all safe.directory '*'
00:00:05.187 [Pipeline] httpRequest
00:00:06.554 [Pipeline] echo
00:00:06.556 Sorcerer 10.211.164.101 is alive
00:00:06.565 [Pipeline] httpRequest
00:00:06.570 HttpMethod: GET
00:00:06.570 URL: http://10.211.164.101/packages/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz
00:00:06.571 Sending request to url: http://10.211.164.101/packages/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz
00:00:06.579 Response Code: HTTP/1.1 200 OK
00:00:06.580 Success: Status code 200 is in the accepted range: 200,404
00:00:06.580 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz
00:00:15.205 [Pipeline] sh
00:00:15.494 + tar --no-same-owner -xf jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz
00:00:15.512 [Pipeline] httpRequest
00:00:15.534 [Pipeline] echo
00:00:15.536 Sorcerer 10.211.164.101 is alive
00:00:15.545 [Pipeline] httpRequest
00:00:15.550 HttpMethod: GET
00:00:15.551 URL: http://10.211.164.101/packages/spdk_5efb3b7d94fb0ee18bb64f4fe0f3a4cc18ccc93d.tar.gz
00:00:15.552 Sending request to url: http://10.211.164.101/packages/spdk_5efb3b7d94fb0ee18bb64f4fe0f3a4cc18ccc93d.tar.gz
00:00:15.560 Response Code: HTTP/1.1 200 OK
00:00:15.561 Success: Status code 200 is in the accepted range: 200,404
00:00:15.562 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_5efb3b7d94fb0ee18bb64f4fe0f3a4cc18ccc93d.tar.gz
00:01:49.668 [Pipeline] sh
00:01:49.960 + tar --no-same-owner -xf spdk_5efb3b7d94fb0ee18bb64f4fe0f3a4cc18ccc93d.tar.gz
00:01:52.522 [Pipeline] sh
00:01:52.813 + git -C spdk log --oneline -n5
00:01:52.813 5efb3b7d9 raid0: DIF/DIX implementation and tests for RAID0
00:01:52.813 208b98e37 raid: Generic changes to support DIF/DIX for RAID
00:01:52.813 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:01:52.813 fc2398dfa raid: clear base bdev configure_cb after executing
00:01:52.813 5558f3f50 raid: complete bdev_raid_create after sb is written
00:01:52.832 [Pipeline] sh
00:01:53.122 + ip --json address
00:01:53.139 [Pipeline] readJSON
00:01:53.161 [Pipeline] sh
00:01:53.452 + sudo ip link set dev eth5 up
00:01:53.713 + sudo ip address add 192.168.10.10/24 dev eth5
00:01:53.731 [Pipeline] withCredentials
00:01:53.749 Masking supported pattern matches of $beetle_key
00:01:53.751 [Pipeline] {
00:01:53.762 [Pipeline] sh
00:01:54.389 + ssh -i **** -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectionAttempts=5 root@192.168.10.11 'for gpio in {0..10}; do Beetle --SetGpio "$gpio" HIGH; done'
00:01:57.686 Warning: Permanently added '192.168.10.11' (ED25519) to the list of known hosts.
00:02:00.236 [Pipeline] }
00:02:00.260 [Pipeline] // withCredentials
00:02:00.266 [Pipeline] }
00:02:00.282 [Pipeline] // stage
00:02:00.291 [Pipeline] stage
00:02:00.294 [Pipeline] { (Prepare)
00:02:00.308 [Pipeline] writeFile
00:02:00.322 [Pipeline] sh
00:02:00.608 + logger -p user.info -t JENKINS-CI
00:02:00.621 [Pipeline] sh
00:02:00.908 + logger -p user.info -t JENKINS-CI
00:02:00.921 [Pipeline] sh
00:02:01.207 + cat autorun-spdk.conf
00:02:01.207 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:01.207 SPDK_TEST_FUZZER_SHORT=1
00:02:01.207 SPDK_TEST_FUZZER=1
00:02:01.207 SPDK_TEST_SETUP=1
00:02:01.207 SPDK_RUN_UBSAN=1
00:02:01.215 RUN_NIGHTLY=0
00:02:01.220 [Pipeline] readFile
00:02:01.264 [Pipeline] withEnv
00:02:01.266 [Pipeline] {
00:02:01.281 [Pipeline] sh
00:02:01.568 + set -ex
00:02:01.569 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:02:01.569 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:02:01.569 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:01.569 ++ SPDK_TEST_FUZZER_SHORT=1
00:02:01.569 ++ SPDK_TEST_FUZZER=1
00:02:01.569 ++ SPDK_TEST_SETUP=1
00:02:01.569 ++ SPDK_RUN_UBSAN=1
00:02:01.569 ++ RUN_NIGHTLY=0
00:02:01.569 + case $SPDK_TEST_NVMF_NICS in
00:02:01.569 + DRIVERS=
00:02:01.569 + [[ -n '' ]]
00:02:01.569 + exit 0
00:02:01.578 [Pipeline] }
00:02:01.598 [Pipeline] // withEnv
00:02:01.604 [Pipeline] }
00:02:01.622 [Pipeline] // stage
00:02:01.632 [Pipeline] catchError
00:02:01.633 [Pipeline] {
00:02:01.649 [Pipeline] timeout
00:02:01.649 Timeout set to expire in 30 min
00:02:01.651 [Pipeline] {
00:02:01.667 [Pipeline] stage
00:02:01.670 [Pipeline] { (Tests)
00:02:01.685 [Pipeline] sh
00:02:01.975 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:02:01.975 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:02:01.975 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:02:01.975 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:02:01.976 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:01.976 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:02:01.976 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:02:01.976 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:02:01.976 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:02:01.976 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:02:01.976 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:02:01.976 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:02:01.976 + source /etc/os-release
00:02:01.976 ++ NAME='Fedora Linux'
00:02:01.976 ++ VERSION='38 (Cloud Edition)'
00:02:01.976 ++ ID=fedora
00:02:01.976 ++ VERSION_ID=38
00:02:01.976 ++ VERSION_CODENAME=
00:02:01.976 ++ PLATFORM_ID=platform:f38
00:02:01.976 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:01.976 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:01.976 ++ LOGO=fedora-logo-icon
00:02:01.976 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:01.976 ++ HOME_URL=https://fedoraproject.org/
00:02:01.976 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:01.976 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:01.976 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:01.976 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:01.976 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:01.976 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:01.976 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:01.976 ++ SUPPORT_END=2024-05-14
00:02:01.976 ++ VARIANT='Cloud Edition'
00:02:01.976 ++ VARIANT_ID=cloud
00:02:01.976 + uname -a
00:02:01.976 Linux spdk-wfp-13 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:01.976 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:02:05.271 Hugepages
00:02:05.271 node hugesize free / total
00:02:05.271 node0 1048576kB 0 / 0
00:02:05.271 node0 2048kB 0 / 0
00:02:05.271 node1 1048576kB 0 / 0
00:02:05.271 node1 2048kB 0 / 0
00:02:05.271
00:02:05.271 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:05.271 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:05.271 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:05.271 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:05.271 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:05.271 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:05.271 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:05.271 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:05.271 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:05.271 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:05.271 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:05.271 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:05.271 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:05.271 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:05.271 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:05.271 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:05.271 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:05.271 NVMe 0000:dc:00.0 8086 0953 1 nvme nvme0 nvme0n1
00:02:05.271 NVMe 0000:dd:00.0 8086 0a54 1 nvme nvme1 nvme1n1
00:02:05.271 NVMe 0000:de:00.0 8086 0953 1 nvme nvme2 nvme2n1
00:02:05.271 NVMe 0000:df:00.0 8086 0a54 1 nvme nvme3 nvme3n1
00:02:05.271 + rm -f /tmp/spdk-ld-path
00:02:05.271 + source autorun-spdk.conf
00:02:05.271 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:05.271 ++ SPDK_TEST_FUZZER_SHORT=1
00:02:05.271 ++ SPDK_TEST_FUZZER=1
00:02:05.271 ++ SPDK_TEST_SETUP=1
00:02:05.271 ++ SPDK_RUN_UBSAN=1
00:02:05.271 ++ RUN_NIGHTLY=0
00:02:05.271 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:05.271 + [[ -n '' ]]
00:02:05.271 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:05.271 + for M in /var/spdk/build-*-manifest.txt 00:02:05.271 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:05.271 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:02:05.271 + for M in /var/spdk/build-*-manifest.txt 00:02:05.271 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:05.271 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:02:05.271 ++ uname 00:02:05.271 + [[ Linux == \L\i\n\u\x ]] 00:02:05.271 + sudo dmesg -T 00:02:05.271 + sudo dmesg --clear 00:02:05.271 + dmesg_pid=30165 00:02:05.271 + [[ Fedora Linux == FreeBSD ]] 00:02:05.271 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.271 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.271 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:05.271 + sudo dmesg -Tw 00:02:05.271 + [[ -x /usr/src/fio-static/fio ]] 00:02:05.271 + export FIO_BIN=/usr/src/fio-static/fio 00:02:05.271 + FIO_BIN=/usr/src/fio-static/fio 00:02:05.271 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:05.271 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:05.271 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:05.271 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.271 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.271 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:05.271 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.271 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.271 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:02:05.271 Test configuration: 00:02:05.271 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.271 SPDK_TEST_FUZZER_SHORT=1 00:02:05.271 SPDK_TEST_FUZZER=1 00:02:05.271 SPDK_TEST_SETUP=1 00:02:05.271 SPDK_RUN_UBSAN=1 00:02:05.272 RUN_NIGHTLY=0 15:50:23 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:05.272 15:50:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:05.272 15:50:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:05.272 15:50:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:05.272 15:50:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.272 15:50:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.272 15:50:23 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.272 15:50:23 -- paths/export.sh@5 -- $ export PATH 00:02:05.272 15:50:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.272 15:50:23 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:05.272 15:50:23 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:05.272 15:50:23 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721915423.XXXXXX 00:02:05.272 15:50:23 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721915423.py7PV6 00:02:05.272 15:50:23 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:05.272 15:50:23 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:02:05.272 15:50:23 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:02:05.272 15:50:23 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:05.272 15:50:23 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:05.272 15:50:23 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:05.272 15:50:23 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:02:05.272 15:50:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.272 15:50:23 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:05.272 15:50:23 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:05.272 15:50:23 -- pm/common@17 -- $ local monitor 00:02:05.272 15:50:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.272 15:50:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.272 15:50:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.272 15:50:23 -- pm/common@21 -- $ date +%s 00:02:05.272 15:50:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.272 15:50:23 -- pm/common@21 -- $ date +%s 00:02:05.272 15:50:23 -- pm/common@25 -- $ sleep 1 00:02:05.272 15:50:23 -- pm/common@21 -- $ date +%s 00:02:05.272 15:50:23 -- pm/common@21 -- $ date +%s 00:02:05.272 15:50:23 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721915423 00:02:05.272 15:50:23 -- 
pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721915423 00:02:05.272 15:50:23 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721915423 00:02:05.272 15:50:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721915423 00:02:05.272 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721915423_collect-vmstat.pm.log 00:02:05.272 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721915423_collect-cpu-load.pm.log 00:02:05.272 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721915423_collect-cpu-temp.pm.log 00:02:05.272 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721915423_collect-bmc-pm.bmc.pm.log 00:02:06.225 15:50:24 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:06.225 15:50:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:06.225 15:50:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:06.225 15:50:24 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:06.225 15:50:24 -- spdk/autobuild.sh@16 -- $ date -u 00:02:06.225 Thu Jul 25 01:50:24 PM UTC 2024 00:02:06.225 15:50:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:06.225 v24.09-pre-323-g5efb3b7d9 00:02:06.226 15:50:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:06.226 15:50:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:06.226 15:50:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:06.226 15:50:24 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:06.226 15:50:24 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:06.226 15:50:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.485 ************************************ 00:02:06.485 START TEST ubsan 00:02:06.485 ************************************ 00:02:06.485 15:50:24 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:06.485 using ubsan 00:02:06.485 00:02:06.485 real 0m0.000s 00:02:06.485 user 0m0.000s 00:02:06.485 sys 0m0.000s 00:02:06.485 15:50:24 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:06.485 15:50:24 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:06.485 ************************************ 00:02:06.485 END TEST ubsan 00:02:06.485 ************************************ 00:02:06.485 15:50:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:06.485 15:50:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:06.485 15:50:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:06.485 15:50:24 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:02:06.485 15:50:24 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:02:06.485 15:50:24 -- common/autobuild_common.sh@435 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:02:06.485 15:50:24 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:06.485 15:50:24 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:06.485 15:50:24 -- 
common/autotest_common.sh@10 -- $ set +x 00:02:06.485 ************************************ 00:02:06.485 START TEST autobuild_llvm_precompile 00:02:06.485 ************************************ 00:02:06.485 15:50:24 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ _llvm_precompile 00:02:06.485 15:50:24 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:02:08.393 15:50:26 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:02:08.393 Target: x86_64-redhat-linux-gnu 00:02:08.393 Thread model: posix 00:02:08.393 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:02:08.393 15:50:26 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:02:08.393 15:50:26 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:02:08.393 15:50:26 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:02:08.393 15:50:26 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:02:08.393 15:50:26 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:02:08.393 15:50:26 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:02:08.393 15:50:26 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:08.393 15:50:26 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:02:08.393 15:50:26 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:02:08.393 15:50:26 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:08.963 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:08.963 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:09.903 Using 'verbs' RDMA provider 00:02:23.509 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:38.406 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:38.406 Creating mk/config.mk...done. 00:02:38.406 Creating mk/cc.flags.mk...done. 00:02:38.406 Type 'make' to build. 
00:02:38.406 00:02:38.406 real 0m29.825s 00:02:38.406 user 0m11.973s 00:02:38.406 sys 0m16.728s 00:02:38.406 15:50:54 autobuild_llvm_precompile -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:38.406 15:50:54 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:02:38.406 ************************************ 00:02:38.406 END TEST autobuild_llvm_precompile 00:02:38.406 ************************************ 00:02:38.406 15:50:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:38.406 15:50:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:38.406 15:50:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:38.406 15:50:54 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:02:38.406 15:50:54 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:38.406 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:38.406 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:38.406 Using 'verbs' RDMA provider 00:02:48.393 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:58.387 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:58.956 Creating mk/config.mk...done. 00:02:58.956 Creating mk/cc.flags.mk...done. 00:02:58.956 Type 'make' to build. 00:02:58.956 15:51:16 -- spdk/autobuild.sh@69 -- $ run_test make make -j88 00:02:58.957 15:51:16 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:58.957 15:51:16 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:58.957 15:51:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:58.957 ************************************ 00:02:58.957 START TEST make 00:02:58.957 ************************************ 00:02:58.957 15:51:16 make -- common/autotest_common.sh@1125 -- $ make -j88 00:02:59.216 make[1]: Nothing to be done for 'all'. 
00:03:01.768 The Meson build system 00:03:01.768 Version: 1.3.1 00:03:01.768 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:03:01.768 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:01.768 Build type: native build 00:03:01.768 Project name: libvfio-user 00:03:01.768 Project version: 0.0.1 00:03:01.768 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:03:01.768 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:03:01.768 Host machine cpu family: x86_64 00:03:01.768 Host machine cpu: x86_64 00:03:01.768 Run-time dependency threads found: YES 00:03:01.768 Library dl found: YES 00:03:01.768 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:01.768 Run-time dependency json-c found: YES 0.17 00:03:01.768 Run-time dependency cmocka found: YES 1.1.7 00:03:01.768 Program pytest-3 found: NO 00:03:01.768 Program flake8 found: NO 00:03:01.768 Program misspell-fixer found: NO 00:03:01.768 Program restructuredtext-lint found: NO 00:03:01.768 Program valgrind found: YES (/usr/bin/valgrind) 00:03:01.768 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:01.768 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:01.768 Compiler for C supports arguments -Wwrite-strings: YES 00:03:01.768 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:01.768 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:01.768 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:01.768 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:01.768 Build targets in project: 8 00:03:01.768 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:01.768 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:01.768 00:03:01.768 libvfio-user 0.0.1 00:03:01.768 00:03:01.768 User defined options 00:03:01.768 buildtype : debug 00:03:01.768 default_library: static 00:03:01.768 libdir : /usr/local/lib 00:03:01.768 00:03:01.768 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:01.768 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:01.768 [1/36] Compiling C object samples/null.p/null.c.o 00:03:01.768 [2/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:01.768 [3/36] Compiling C object samples/lspci.p/lspci.c.o 00:03:01.768 [4/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:03:01.768 [5/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:01.768 [6/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:01.768 [7/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:03:01.768 [8/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:01.768 [9/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:03:01.768 [10/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:01.768 [11/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:03:01.768 [12/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:01.768 [13/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:01.768 [14/36] Compiling C object test/unit_tests.p/mocks.c.o 00:03:01.768 [15/36] Compiling C object samples/server.p/server.c.o 00:03:01.768 [16/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:01.768 [17/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:01.768 [18/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:01.768 [19/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:01.768 [20/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:03:01.768 [21/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:03:01.768 [22/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:01.768 [23/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:01.768 [24/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:01.768 [25/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:03:02.028 [26/36] Compiling C object samples/client.p/client.c.o 00:03:02.028 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:03:02.028 [28/36] Linking target samples/client 00:03:02.028 [29/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:02.028 [30/36] Linking static target lib/libvfio-user.a 00:03:02.028 [31/36] Linking target test/unit_tests 00:03:02.028 [32/36] Linking target samples/server 00:03:02.028 [33/36] Linking target samples/gpio-pci-idio-16 00:03:02.028 [34/36] Linking target samples/shadow_ioeventfd_server 00:03:02.028 [35/36] Linking target samples/lspci 00:03:02.028 [36/36] Linking target samples/null 00:03:02.028 INFO: autodetecting backend as ninja 00:03:02.028 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:02.028 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:02.595 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:02.595 ninja: no work to do. 00:03:07.878 The Meson build system 00:03:07.878 Version: 1.3.1 00:03:07.878 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:03:07.878 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:03:07.878 Build type: native build 00:03:07.878 Program cat found: YES (/usr/bin/cat) 00:03:07.878 Project name: DPDK 00:03:07.878 Project version: 24.03.0 00:03:07.878 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:03:07.878 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:03:07.878 Host machine cpu family: x86_64 00:03:07.878 Host machine cpu: x86_64 00:03:07.878 Message: ## Building in Developer Mode ## 00:03:07.878 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:07.878 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:07.878 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:07.878 Program python3 found: YES (/usr/bin/python3) 00:03:07.878 Program cat found: YES (/usr/bin/cat) 00:03:07.878 Compiler for C supports arguments -march=native: YES 00:03:07.878 Checking for size of "void *" : 8 00:03:07.878 Checking for size of "void *" : 8 (cached) 00:03:07.878 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:07.878 Library m found: YES 00:03:07.878 Library numa found: YES 00:03:07.878 Has header "numaif.h" : YES 00:03:07.878 Library fdt found: NO 00:03:07.878 Library execinfo found: NO 00:03:07.878 Has header "execinfo.h" : YES 00:03:07.878 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:07.878 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:07.878 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:07.878 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:07.878 Run-time dependency openssl found: YES 3.0.9 00:03:07.878 Run-time dependency libpcap found: YES 1.10.4 00:03:07.878 Has header "pcap.h" with dependency libpcap: YES 00:03:07.878 Compiler for C supports arguments -Wcast-qual: YES 00:03:07.878 Compiler for C supports arguments -Wdeprecated: YES 00:03:07.878 Compiler for C supports arguments -Wformat: YES 00:03:07.878 Compiler for C supports arguments -Wformat-nonliteral: YES 00:03:07.878 Compiler for C supports arguments -Wformat-security: YES 00:03:07.878 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:07.878 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:07.878 Compiler for C supports arguments -Wnested-externs: YES 00:03:07.878 Compiler for C supports arguments -Wold-style-definition: YES 00:03:07.879 Compiler for C supports arguments -Wpointer-arith: YES 00:03:07.879 Compiler for C supports arguments -Wsign-compare: YES 00:03:07.879 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:07.879 Compiler for C supports arguments -Wundef: YES 00:03:07.879 Compiler for C supports arguments -Wwrite-strings: YES 00:03:07.879 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:07.879 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:03:07.879 Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:03:07.879 Program objdump found: YES (/usr/bin/objdump) 00:03:07.879 Compiler for C supports arguments -mavx512f: YES 00:03:07.879 Checking if "AVX512 checking" compiles: YES 00:03:07.879 Fetching value of define "__SSE4_2__" : 1 00:03:07.879 Fetching value of define "__AES__" : 1 00:03:07.879 Fetching value of define "__AVX__" : 1 00:03:07.879 Fetching value of define "__AVX2__" : 1 00:03:07.879 Fetching value of define "__AVX512BW__" : 1 00:03:07.879 Fetching value of define "__AVX512CD__" : 1 00:03:07.879 Fetching value of define "__AVX512DQ__" : 1 00:03:07.879 Fetching value of define "__AVX512F__" : 1 00:03:07.879 Fetching value of define "__AVX512VL__" : 1 00:03:07.879 Fetching value of define "__PCLMUL__" : 1 00:03:07.879 Fetching value of define "__RDRND__" : 1 00:03:07.879 Fetching value of define "__RDSEED__" : 1 00:03:07.879 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:07.879 Fetching value of define "__znver1__" : (undefined) 00:03:07.879 Fetching value of define "__znver2__" : (undefined) 00:03:07.879 Fetching value of define "__znver3__" : (undefined) 00:03:07.879 Fetching value of define "__znver4__" : (undefined) 00:03:07.879 Compiler for C supports arguments -Wno-format-truncation: NO 00:03:07.879 Message: lib/log: Defining dependency "log" 00:03:07.879 Message: lib/kvargs: Defining dependency "kvargs" 00:03:07.879 Message: lib/telemetry: Defining dependency "telemetry" 00:03:07.879 Checking for function "getentropy" : NO 00:03:07.879 Message: lib/eal: Defining dependency "eal" 00:03:07.879 Message: lib/ring: Defining dependency "ring" 00:03:07.879 Message: lib/rcu: Defining dependency "rcu" 00:03:07.879 Message: lib/mempool: Defining dependency "mempool" 00:03:07.879 Message: lib/mbuf: Defining dependency "mbuf" 00:03:07.879 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:07.879 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:07.879 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:07.879 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:07.879 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:07.879 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:07.879 Compiler for C supports arguments -mpclmul: YES 00:03:07.879 Compiler for C supports arguments -maes: YES 00:03:07.879 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:07.879 Compiler for C supports arguments -mavx512bw: YES 00:03:07.879 Compiler for C supports arguments -mavx512dq: YES 00:03:07.879 Compiler for C supports arguments -mavx512vl: YES 00:03:07.879 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:07.879 Compiler for C supports arguments -mavx2: YES 00:03:07.879 Compiler for C supports arguments -mavx: YES 00:03:07.879 Message: lib/net: Defining dependency "net" 00:03:07.879 Message: lib/meter: Defining dependency "meter" 00:03:07.879 Message: lib/ethdev: Defining dependency "ethdev" 00:03:07.879 Message: lib/pci: Defining dependency "pci" 00:03:07.879 Message: lib/cmdline: Defining dependency "cmdline" 00:03:07.879 Message: lib/hash: Defining dependency "hash" 00:03:07.879 Message: lib/timer: Defining dependency "timer" 00:03:07.879 Message: lib/compressdev: Defining dependency "compressdev" 00:03:07.879 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:07.879 Message: lib/dmadev: Defining dependency "dmadev" 00:03:07.879 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:07.879 Message: lib/power: Defining dependency "power" 00:03:07.879 Message: lib/reorder: Defining 
dependency "reorder" 00:03:07.879 Message: lib/security: Defining dependency "security" 00:03:07.879 Has header "linux/userfaultfd.h" : YES 00:03:07.879 Has header "linux/vduse.h" : YES 00:03:07.879 Message: lib/vhost: Defining dependency "vhost" 00:03:07.879 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:03:07.879 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:07.879 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:07.879 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:07.879 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:07.879 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:07.879 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:07.879 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:07.879 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:07.879 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:07.879 Program doxygen found: YES (/usr/bin/doxygen) 00:03:07.879 Configuring doxy-api-html.conf using configuration 00:03:07.879 Configuring doxy-api-man.conf using configuration 00:03:07.879 Program mandb found: YES (/usr/bin/mandb) 00:03:07.879 Program sphinx-build found: NO 00:03:07.879 Configuring rte_build_config.h using configuration 00:03:07.879 Message: 00:03:07.879 ================= 00:03:07.879 Applications Enabled 00:03:07.879 ================= 00:03:07.879 00:03:07.879 apps: 00:03:07.879 00:03:07.879 00:03:07.879 Message: 00:03:07.879 ================= 00:03:07.879 Libraries Enabled 00:03:07.879 ================= 00:03:07.879 00:03:07.879 libs: 00:03:07.879 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:07.879 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:07.879 cryptodev, dmadev, power, reorder, security, vhost, 00:03:07.879 00:03:07.879 Message: 00:03:07.879 =============== 00:03:07.879 Drivers Enabled 00:03:07.879 =============== 00:03:07.879 00:03:07.879 common: 00:03:07.879 00:03:07.879 bus: 00:03:07.879 pci, vdev, 00:03:07.879 mempool: 00:03:07.879 ring, 00:03:07.879 dma: 00:03:07.879 00:03:07.879 net: 00:03:07.879 00:03:07.879 crypto: 00:03:07.879 00:03:07.879 compress: 00:03:07.879 00:03:07.879 vdpa: 00:03:07.879 00:03:07.879 00:03:07.879 Message: 00:03:07.879 ================= 00:03:07.879 Content Skipped 00:03:07.879 ================= 00:03:07.879 00:03:07.879 apps: 00:03:07.879 dumpcap: explicitly disabled via build config 00:03:07.879 graph: explicitly disabled via build config 00:03:07.879 pdump: explicitly disabled via build config 00:03:07.879 proc-info: explicitly disabled via build config 00:03:07.879 test-acl: explicitly disabled via build config 00:03:07.879 test-bbdev: explicitly disabled via build config 00:03:07.879 test-cmdline: explicitly disabled via build config 00:03:07.879 test-compress-perf: explicitly disabled via build config 00:03:07.879 test-crypto-perf: explicitly disabled via build config 00:03:07.879 test-dma-perf: explicitly disabled via build config 00:03:07.879 test-eventdev: explicitly disabled via build config 00:03:07.879 test-fib: explicitly disabled via build config 00:03:07.879 test-flow-perf: explicitly disabled via build config 00:03:07.879 test-gpudev: explicitly disabled via build config 00:03:07.879 test-mldev: explicitly disabled via build config 00:03:07.879 test-pipeline: explicitly disabled via build config 00:03:07.879 test-pmd: explicitly 
disabled via build config 00:03:07.879 test-regex: explicitly disabled via build config 00:03:07.879 test-sad: explicitly disabled via build config 00:03:07.879 test-security-perf: explicitly disabled via build config 00:03:07.879 00:03:07.879 libs: 00:03:07.879 argparse: explicitly disabled via build config 00:03:07.879 metrics: explicitly disabled via build config 00:03:07.879 acl: explicitly disabled via build config 00:03:07.879 bbdev: explicitly disabled via build config 00:03:07.879 bitratestats: explicitly disabled via build config 00:03:07.879 bpf: explicitly disabled via build config 00:03:07.879 cfgfile: explicitly disabled via build config 00:03:07.879 distributor: explicitly disabled via build config 00:03:07.879 efd: explicitly disabled via build config 00:03:07.879 eventdev: explicitly disabled via build config 00:03:07.879 dispatcher: explicitly disabled via build config 00:03:07.879 gpudev: explicitly disabled via build config 00:03:07.879 gro: explicitly disabled via build config 00:03:07.879 gso: explicitly disabled via build config 00:03:07.879 ip_frag: explicitly disabled via build config 00:03:07.879 jobstats: explicitly disabled via build config 00:03:07.879 latencystats: explicitly disabled via build config 00:03:07.879 lpm: explicitly disabled via build config 00:03:07.879 member: explicitly disabled via build config 00:03:07.879 pcapng: explicitly disabled via build config 00:03:07.879 rawdev: explicitly disabled via build config 00:03:07.879 regexdev: explicitly disabled via build config 00:03:07.879 mldev: explicitly disabled via build config 00:03:07.879 rib: explicitly disabled via build config 00:03:07.879 sched: explicitly disabled via build config 00:03:07.879 stack: explicitly disabled via build config 00:03:07.879 ipsec: explicitly disabled via build config 00:03:07.879 pdcp: explicitly disabled via build config 00:03:07.879 fib: explicitly disabled via build config 00:03:07.879 port: explicitly disabled via build config 00:03:07.879 pdump: explicitly disabled via build config 00:03:07.879 table: explicitly disabled via build config 00:03:07.879 pipeline: explicitly disabled via build config 00:03:07.879 graph: explicitly disabled via build config 00:03:07.879 node: explicitly disabled via build config 00:03:07.879 00:03:07.879 drivers: 00:03:07.879 common/cpt: not in enabled drivers build config 00:03:07.879 common/dpaax: not in enabled drivers build config 00:03:07.879 common/iavf: not in enabled drivers build config 00:03:07.879 common/idpf: not in enabled drivers build config 00:03:07.879 common/ionic: not in enabled drivers build config 00:03:07.879 common/mvep: not in enabled drivers build config 00:03:07.880 common/octeontx: not in enabled drivers build config 00:03:07.880 bus/auxiliary: not in enabled drivers build config 00:03:07.880 bus/cdx: not in enabled drivers build config 00:03:07.880 bus/dpaa: not in enabled drivers build config 00:03:07.880 bus/fslmc: not in enabled drivers build config 00:03:07.880 bus/ifpga: not in enabled drivers build config 00:03:07.880 bus/platform: not in enabled drivers build config 00:03:07.880 bus/uacce: not in enabled drivers build config 00:03:07.880 bus/vmbus: not in enabled drivers build config 00:03:07.880 common/cnxk: not in enabled drivers build config 00:03:07.880 common/mlx5: not in enabled drivers build config 00:03:07.880 common/nfp: not in enabled drivers build config 00:03:07.880 common/nitrox: not in enabled drivers build config 00:03:07.880 common/qat: not in enabled drivers build config 
00:03:07.880 common/sfc_efx: not in enabled drivers build config 00:03:07.880 mempool/bucket: not in enabled drivers build config 00:03:07.880 mempool/cnxk: not in enabled drivers build config 00:03:07.880 mempool/dpaa: not in enabled drivers build config 00:03:07.880 mempool/dpaa2: not in enabled drivers build config 00:03:07.880 mempool/octeontx: not in enabled drivers build config 00:03:07.880 mempool/stack: not in enabled drivers build config 00:03:07.880 dma/cnxk: not in enabled drivers build config 00:03:07.880 dma/dpaa: not in enabled drivers build config 00:03:07.880 dma/dpaa2: not in enabled drivers build config 00:03:07.880 dma/hisilicon: not in enabled drivers build config 00:03:07.880 dma/idxd: not in enabled drivers build config 00:03:07.880 dma/ioat: not in enabled drivers build config 00:03:07.880 dma/skeleton: not in enabled drivers build config 00:03:07.880 net/af_packet: not in enabled drivers build config 00:03:07.880 net/af_xdp: not in enabled drivers build config 00:03:07.880 net/ark: not in enabled drivers build config 00:03:07.880 net/atlantic: not in enabled drivers build config 00:03:07.880 net/avp: not in enabled drivers build config 00:03:07.880 net/axgbe: not in enabled drivers build config 00:03:07.880 net/bnx2x: not in enabled drivers build config 00:03:07.880 net/bnxt: not in enabled drivers build config 00:03:07.880 net/bonding: not in enabled drivers build config 00:03:07.880 net/cnxk: not in enabled drivers build config 00:03:07.880 net/cpfl: not in enabled drivers build config 00:03:07.880 net/cxgbe: not in enabled drivers build config 00:03:07.880 net/dpaa: not in enabled drivers build config 00:03:07.880 net/dpaa2: not in enabled drivers build config 00:03:07.880 net/e1000: not in enabled drivers build config 00:03:07.880 net/ena: not in enabled drivers build config 00:03:07.880 net/enetc: not in enabled drivers build config 00:03:07.880 net/enetfec: not in enabled drivers build config 00:03:07.880 net/enic: not in enabled drivers build config 00:03:07.880 net/failsafe: not in enabled drivers build config 00:03:07.880 net/fm10k: not in enabled drivers build config 00:03:07.880 net/gve: not in enabled drivers build config 00:03:07.880 net/hinic: not in enabled drivers build config 00:03:07.880 net/hns3: not in enabled drivers build config 00:03:07.880 net/i40e: not in enabled drivers build config 00:03:07.880 net/iavf: not in enabled drivers build config 00:03:07.880 net/ice: not in enabled drivers build config 00:03:07.880 net/idpf: not in enabled drivers build config 00:03:07.880 net/igc: not in enabled drivers build config 00:03:07.880 net/ionic: not in enabled drivers build config 00:03:07.880 net/ipn3ke: not in enabled drivers build config 00:03:07.880 net/ixgbe: not in enabled drivers build config 00:03:07.880 net/mana: not in enabled drivers build config 00:03:07.880 net/memif: not in enabled drivers build config 00:03:07.880 net/mlx4: not in enabled drivers build config 00:03:07.880 net/mlx5: not in enabled drivers build config 00:03:07.880 net/mvneta: not in enabled drivers build config 00:03:07.880 net/mvpp2: not in enabled drivers build config 00:03:07.880 net/netvsc: not in enabled drivers build config 00:03:07.880 net/nfb: not in enabled drivers build config 00:03:07.880 net/nfp: not in enabled drivers build config 00:03:07.880 net/ngbe: not in enabled drivers build config 00:03:07.880 net/null: not in enabled drivers build config 00:03:07.880 net/octeontx: not in enabled drivers build config 00:03:07.880 net/octeon_ep: not in enabled 
drivers build config 00:03:07.880 net/pcap: not in enabled drivers build config 00:03:07.880 net/pfe: not in enabled drivers build config 00:03:07.880 net/qede: not in enabled drivers build config 00:03:07.880 net/ring: not in enabled drivers build config 00:03:07.880 net/sfc: not in enabled drivers build config 00:03:07.880 net/softnic: not in enabled drivers build config 00:03:07.880 net/tap: not in enabled drivers build config 00:03:07.880 net/thunderx: not in enabled drivers build config 00:03:07.880 net/txgbe: not in enabled drivers build config 00:03:07.880 net/vdev_netvsc: not in enabled drivers build config 00:03:07.880 net/vhost: not in enabled drivers build config 00:03:07.880 net/virtio: not in enabled drivers build config 00:03:07.880 net/vmxnet3: not in enabled drivers build config 00:03:07.880 raw/*: missing internal dependency, "rawdev" 00:03:07.880 crypto/armv8: not in enabled drivers build config 00:03:07.880 crypto/bcmfs: not in enabled drivers build config 00:03:07.880 crypto/caam_jr: not in enabled drivers build config 00:03:07.880 crypto/ccp: not in enabled drivers build config 00:03:07.880 crypto/cnxk: not in enabled drivers build config 00:03:07.880 crypto/dpaa_sec: not in enabled drivers build config 00:03:07.880 crypto/dpaa2_sec: not in enabled drivers build config 00:03:07.880 crypto/ipsec_mb: not in enabled drivers build config 00:03:07.880 crypto/mlx5: not in enabled drivers build config 00:03:07.880 crypto/mvsam: not in enabled drivers build config 00:03:07.880 crypto/nitrox: not in enabled drivers build config 00:03:07.880 crypto/null: not in enabled drivers build config 00:03:07.880 crypto/octeontx: not in enabled drivers build config 00:03:07.880 crypto/openssl: not in enabled drivers build config 00:03:07.880 crypto/scheduler: not in enabled drivers build config 00:03:07.880 crypto/uadk: not in enabled drivers build config 00:03:07.880 crypto/virtio: not in enabled drivers build config 00:03:07.880 compress/isal: not in enabled drivers build config 00:03:07.880 compress/mlx5: not in enabled drivers build config 00:03:07.880 compress/nitrox: not in enabled drivers build config 00:03:07.880 compress/octeontx: not in enabled drivers build config 00:03:07.880 compress/zlib: not in enabled drivers build config 00:03:07.880 regex/*: missing internal dependency, "regexdev" 00:03:07.880 ml/*: missing internal dependency, "mldev" 00:03:07.880 vdpa/ifc: not in enabled drivers build config 00:03:07.880 vdpa/mlx5: not in enabled drivers build config 00:03:07.880 vdpa/nfp: not in enabled drivers build config 00:03:07.880 vdpa/sfc: not in enabled drivers build config 00:03:07.880 event/*: missing internal dependency, "eventdev" 00:03:07.880 baseband/*: missing internal dependency, "bbdev" 00:03:07.880 gpu/*: missing internal dependency, "gpudev" 00:03:07.880 00:03:07.880 00:03:07.880 Build targets in project: 85 00:03:07.880 00:03:07.880 DPDK 24.03.0 00:03:07.880 00:03:07.880 User defined options 00:03:07.880 buildtype : debug 00:03:07.880 default_library : static 00:03:07.880 libdir : lib 00:03:07.880 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:03:07.880 c_args : -fPIC -Werror 00:03:07.880 c_link_args : 00:03:07.880 cpu_instruction_set: native 00:03:07.880 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:03:07.880 disable_libs : 
port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:03:07.880 enable_docs : false 00:03:07.880 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:07.880 enable_kmods : false 00:03:07.880 max_lcores : 128 00:03:07.880 tests : false 00:03:07.880 00:03:07.880 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:07.880 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:03:07.880 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:07.880 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:07.880 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:07.880 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:07.880 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:07.880 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:07.880 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:07.880 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:07.880 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:07.880 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:07.880 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:07.880 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:07.880 [13/268] Linking static target lib/librte_kvargs.a 00:03:07.880 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:07.880 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:07.880 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:07.880 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:07.880 [18/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:07.880 [19/268] Linking static target lib/librte_log.a 00:03:08.141 [20/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.141 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:08.141 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:08.141 [23/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:08.141 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:08.141 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:08.141 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:08.141 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:08.141 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:08.141 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:08.141 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:08.141 [31/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:08.141 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:08.141 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:08.141 [34/268] 
Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:08.141 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:08.141 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:08.141 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:08.141 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:08.141 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:08.141 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:08.141 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:08.141 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:08.141 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:08.141 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:08.141 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:08.141 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:08.141 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:08.141 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:08.400 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:08.400 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:08.400 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:08.400 [52/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:08.400 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:08.400 [54/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:08.400 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:08.400 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:08.400 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:08.400 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:08.400 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:08.400 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:08.400 [61/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:08.400 [62/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:08.400 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:08.400 [64/268] Linking static target lib/librte_telemetry.a 00:03:08.400 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:08.400 [66/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:08.400 [67/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:08.400 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:08.400 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:08.400 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:08.400 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:08.400 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:08.400 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:08.401 
[74/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:08.401 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:08.401 [76/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:08.401 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:08.401 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:08.401 [79/268] Linking static target lib/librte_pci.a 00:03:08.401 [80/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:08.401 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:08.401 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:08.401 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:08.401 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:08.401 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:08.401 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:08.401 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:08.401 [88/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:08.401 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:08.401 [90/268] Linking static target lib/librte_ring.a 00:03:08.401 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:08.401 [92/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.401 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:08.401 [94/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:08.401 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:08.401 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:08.401 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:08.401 [98/268] Linking static target lib/librte_meter.a 00:03:08.401 [99/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:08.401 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:08.401 [101/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:08.401 [102/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:08.401 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:08.401 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:08.401 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:08.401 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:08.401 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:08.401 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:08.401 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:08.401 [110/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:08.401 [111/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:08.401 [112/268] Linking target lib/librte_log.so.24.1 00:03:08.401 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:08.401 [114/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:08.401 [115/268] Linking static target lib/librte_eal.a 
00:03:08.401 [116/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:08.401 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:08.401 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:08.401 [119/268] Linking static target lib/librte_rcu.a 00:03:08.401 [120/268] Linking static target lib/librte_net.a 00:03:08.401 [121/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:08.401 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:08.401 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:08.401 [124/268] Linking static target lib/librte_mempool.a 00:03:08.401 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:08.401 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:08.660 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:08.660 [128/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.660 [129/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:08.660 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:08.660 [131/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.660 [132/268] Linking static target lib/librte_mbuf.a 00:03:08.660 [133/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.660 [134/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:08.660 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:08.660 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:08.660 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:08.660 [138/268] Linking target lib/librte_kvargs.so.24.1 00:03:08.660 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:08.660 [140/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.660 [141/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.660 [142/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:08.660 [143/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.660 [144/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:08.660 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:08.661 [146/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:08.661 [147/268] Linking static target lib/librte_cmdline.a 00:03:08.661 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:08.661 [149/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:08.661 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:08.661 [151/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:08.661 [152/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:08.661 [153/268] Linking target lib/librte_telemetry.so.24.1 00:03:08.661 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:08.661 [155/268] Linking static target 
drivers/libtmp_rte_bus_vdev.a 00:03:08.661 [156/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:08.661 [157/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:08.661 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:08.920 [159/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:08.920 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:08.920 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:08.920 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:08.920 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:08.920 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:08.920 [165/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:08.920 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:08.920 [167/268] Linking static target lib/librte_compressdev.a 00:03:08.920 [168/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:08.920 [169/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:08.920 [170/268] Linking static target lib/librte_security.a 00:03:08.920 [171/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:08.920 [172/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:08.920 [173/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:08.920 [174/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:08.920 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:08.920 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:08.920 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:08.920 [178/268] Linking static target lib/librte_timer.a 00:03:08.920 [179/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:08.920 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:08.920 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:08.920 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:08.920 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:08.920 [184/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:08.920 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:08.920 [186/268] Linking static target lib/librte_dmadev.a 00:03:08.920 [187/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:08.920 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:08.920 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:08.920 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:08.920 [191/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:08.920 [192/268] Linking static target lib/librte_reorder.a 00:03:08.920 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:08.920 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:08.920 [195/268] Linking static target lib/librte_power.a 00:03:08.920 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a 
custom command 00:03:08.920 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:09.180 [198/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:09.180 [199/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:09.180 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:09.180 [201/268] Linking static target lib/librte_hash.a 00:03:09.180 [202/268] Linking static target drivers/librte_bus_vdev.a 00:03:09.180 [203/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.180 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:09.180 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:09.180 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:09.180 [207/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:09.180 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:09.180 [209/268] Linking static target drivers/librte_bus_pci.a 00:03:09.180 [210/268] Linking static target lib/librte_cryptodev.a 00:03:09.180 [211/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.180 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:09.180 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:09.180 [214/268] Linking static target drivers/librte_mempool_ring.a 00:03:09.180 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:09.440 [216/268] Linking static target lib/librte_ethdev.a 00:03:09.441 [217/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.441 [218/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:09.441 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.441 [220/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.441 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.441 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.700 [223/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.960 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.960 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.960 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:09.960 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.960 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.960 [229/268] Linking static target lib/librte_vhost.a 00:03:11.365 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.934 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.507 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson 
to capture output) 00:03:18.507 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.507 [234/268] Linking target lib/librte_eal.so.24.1 00:03:18.507 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:18.507 [236/268] Linking target lib/librte_dmadev.so.24.1 00:03:18.507 [237/268] Linking target lib/librte_timer.so.24.1 00:03:18.507 [238/268] Linking target lib/librte_meter.so.24.1 00:03:18.507 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:18.507 [240/268] Linking target lib/librte_ring.so.24.1 00:03:18.507 [241/268] Linking target lib/librte_pci.so.24.1 00:03:18.766 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:18.766 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:18.766 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:18.766 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:18.766 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:18.766 [247/268] Linking target lib/librte_mempool.so.24.1 00:03:18.766 [248/268] Linking target lib/librte_rcu.so.24.1 00:03:18.766 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:19.026 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:19.026 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:19.026 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:19.026 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:19.026 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:19.285 [255/268] Linking target lib/librte_reorder.so.24.1 00:03:19.285 [256/268] Linking target lib/librte_compressdev.so.24.1 00:03:19.285 [257/268] Linking target lib/librte_net.so.24.1 00:03:19.285 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:19.285 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:19.285 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:19.285 [261/268] Linking target lib/librte_hash.so.24.1 00:03:19.285 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:19.285 [263/268] Linking target lib/librte_security.so.24.1 00:03:19.544 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:19.544 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:19.544 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:19.544 [267/268] Linking target lib/librte_power.so.24.1 00:03:19.544 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:19.544 INFO: autodetecting backend as ninja 00:03:19.544 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 88 00:03:20.482 CC lib/log/log.o 00:03:20.482 CC lib/log/log_flags.o 00:03:20.482 CC lib/log/log_deprecated.o 00:03:20.482 CC lib/ut_mock/mock.o 00:03:20.482 CC lib/ut/ut.o 00:03:20.741 LIB libspdk_ut.a 00:03:20.741 LIB libspdk_ut_mock.a 00:03:20.741 LIB libspdk_log.a 00:03:21.000 CC lib/dma/dma.o 00:03:21.000 CC lib/ioat/ioat.o 00:03:21.000 CC lib/util/base64.o 00:03:21.000 CC lib/util/cpuset.o 00:03:21.000 CC lib/util/bit_array.o 00:03:21.000 CXX lib/trace_parser/trace.o 
00:03:21.000 CC lib/util/crc16.o 00:03:21.000 CC lib/util/crc32.o 00:03:21.000 CC lib/util/crc32c.o 00:03:21.000 CC lib/util/crc32_ieee.o 00:03:21.000 CC lib/util/crc64.o 00:03:21.000 CC lib/util/dif.o 00:03:21.000 CC lib/util/fd.o 00:03:21.000 CC lib/util/fd_group.o 00:03:21.000 CC lib/util/file.o 00:03:21.000 CC lib/util/hexlify.o 00:03:21.000 CC lib/util/iov.o 00:03:21.000 CC lib/util/math.o 00:03:21.000 CC lib/util/net.o 00:03:21.000 CC lib/util/pipe.o 00:03:21.000 CC lib/util/strerror_tls.o 00:03:21.000 CC lib/util/string.o 00:03:21.000 CC lib/util/uuid.o 00:03:21.000 CC lib/util/xor.o 00:03:21.000 CC lib/util/zipf.o 00:03:21.000 CC lib/vfio_user/host/vfio_user_pci.o 00:03:21.000 CC lib/vfio_user/host/vfio_user.o 00:03:21.000 LIB libspdk_dma.a 00:03:21.000 LIB libspdk_ioat.a 00:03:21.261 LIB libspdk_vfio_user.a 00:03:21.261 LIB libspdk_util.a 00:03:21.520 CC lib/vmd/vmd.o 00:03:21.520 CC lib/vmd/led.o 00:03:21.520 CC lib/json/json_parse.o 00:03:21.520 CC lib/json/json_util.o 00:03:21.520 CC lib/rdma_provider/common.o 00:03:21.520 CC lib/json/json_write.o 00:03:21.520 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:21.520 CC lib/rdma_utils/rdma_utils.o 00:03:21.520 CC lib/conf/conf.o 00:03:21.520 CC lib/env_dpdk/env.o 00:03:21.520 CC lib/env_dpdk/memory.o 00:03:21.520 CC lib/env_dpdk/pci.o 00:03:21.520 CC lib/idxd/idxd.o 00:03:21.520 CC lib/env_dpdk/init.o 00:03:21.520 CC lib/idxd/idxd_user.o 00:03:21.520 CC lib/env_dpdk/threads.o 00:03:21.520 CC lib/idxd/idxd_kernel.o 00:03:21.520 CC lib/env_dpdk/pci_ioat.o 00:03:21.520 CC lib/env_dpdk/pci_virtio.o 00:03:21.520 CC lib/env_dpdk/pci_vmd.o 00:03:21.520 CC lib/env_dpdk/pci_idxd.o 00:03:21.520 CC lib/env_dpdk/pci_event.o 00:03:21.520 CC lib/env_dpdk/sigbus_handler.o 00:03:21.520 CC lib/env_dpdk/pci_dpdk.o 00:03:21.520 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:21.520 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:21.520 LIB libspdk_trace_parser.a 00:03:21.779 LIB libspdk_rdma_provider.a 00:03:21.779 LIB libspdk_conf.a 00:03:21.779 LIB libspdk_rdma_utils.a 00:03:21.779 LIB libspdk_json.a 00:03:21.779 LIB libspdk_idxd.a 00:03:22.037 LIB libspdk_vmd.a 00:03:22.037 CC lib/jsonrpc/jsonrpc_server.o 00:03:22.037 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:22.037 CC lib/jsonrpc/jsonrpc_client.o 00:03:22.037 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:22.296 LIB libspdk_jsonrpc.a 00:03:22.553 LIB libspdk_env_dpdk.a 00:03:22.553 CC lib/rpc/rpc.o 00:03:22.553 LIB libspdk_rpc.a 00:03:22.813 CC lib/notify/notify.o 00:03:22.813 CC lib/notify/notify_rpc.o 00:03:22.813 CC lib/keyring/keyring.o 00:03:22.813 CC lib/keyring/keyring_rpc.o 00:03:22.813 CC lib/trace/trace.o 00:03:22.813 CC lib/trace/trace_flags.o 00:03:22.813 CC lib/trace/trace_rpc.o 00:03:23.072 LIB libspdk_notify.a 00:03:23.072 LIB libspdk_keyring.a 00:03:23.072 LIB libspdk_trace.a 00:03:23.331 CC lib/sock/sock.o 00:03:23.331 CC lib/sock/sock_rpc.o 00:03:23.331 CC lib/thread/thread.o 00:03:23.331 CC lib/thread/iobuf.o 00:03:23.590 LIB libspdk_sock.a 00:03:23.848 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:23.848 CC lib/nvme/nvme_ctrlr.o 00:03:23.848 CC lib/nvme/nvme_fabric.o 00:03:23.848 CC lib/nvme/nvme_ns_cmd.o 00:03:23.848 CC lib/nvme/nvme_pcie_common.o 00:03:23.848 CC lib/nvme/nvme_ns.o 00:03:23.848 CC lib/nvme/nvme_pcie.o 00:03:23.848 CC lib/nvme/nvme.o 00:03:23.848 CC lib/nvme/nvme_qpair.o 00:03:23.848 CC lib/nvme/nvme_transport.o 00:03:23.848 CC lib/nvme/nvme_quirks.o 00:03:23.848 CC lib/nvme/nvme_discovery.o 00:03:23.848 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:23.848 CC lib/nvme/nvme_ns_ocssd_cmd.o 
00:03:23.848 CC lib/nvme/nvme_tcp.o 00:03:23.848 CC lib/nvme/nvme_opal.o 00:03:23.848 CC lib/nvme/nvme_io_msg.o 00:03:23.848 CC lib/nvme/nvme_poll_group.o 00:03:23.848 CC lib/nvme/nvme_zns.o 00:03:23.848 CC lib/nvme/nvme_auth.o 00:03:23.848 CC lib/nvme/nvme_stubs.o 00:03:23.848 CC lib/nvme/nvme_cuse.o 00:03:23.848 CC lib/nvme/nvme_rdma.o 00:03:23.848 CC lib/nvme/nvme_vfio_user.o 00:03:24.107 LIB libspdk_thread.a 00:03:24.366 CC lib/blob/blobstore.o 00:03:24.366 CC lib/blob/zeroes.o 00:03:24.366 CC lib/blob/request.o 00:03:24.366 CC lib/blob/blob_bs_dev.o 00:03:24.366 CC lib/accel/accel_rpc.o 00:03:24.366 CC lib/accel/accel.o 00:03:24.366 CC lib/init/json_config.o 00:03:24.366 CC lib/init/subsystem.o 00:03:24.366 CC lib/accel/accel_sw.o 00:03:24.366 CC lib/init/subsystem_rpc.o 00:03:24.366 CC lib/init/rpc.o 00:03:24.366 CC lib/vfu_tgt/tgt_endpoint.o 00:03:24.366 CC lib/vfu_tgt/tgt_rpc.o 00:03:24.366 CC lib/virtio/virtio_vhost_user.o 00:03:24.366 CC lib/virtio/virtio.o 00:03:24.366 CC lib/virtio/virtio_vfio_user.o 00:03:24.366 CC lib/virtio/virtio_pci.o 00:03:24.624 LIB libspdk_init.a 00:03:24.624 LIB libspdk_vfu_tgt.a 00:03:24.624 LIB libspdk_virtio.a 00:03:24.883 CC lib/event/app.o 00:03:24.883 CC lib/event/reactor.o 00:03:24.883 CC lib/event/log_rpc.o 00:03:24.883 CC lib/event/app_rpc.o 00:03:24.883 CC lib/event/scheduler_static.o 00:03:25.142 LIB libspdk_event.a 00:03:25.142 LIB libspdk_accel.a 00:03:25.401 LIB libspdk_nvme.a 00:03:25.401 CC lib/bdev/bdev.o 00:03:25.401 CC lib/bdev/bdev_rpc.o 00:03:25.401 CC lib/bdev/bdev_zone.o 00:03:25.401 CC lib/bdev/part.o 00:03:25.401 CC lib/bdev/scsi_nvme.o 00:03:26.338 LIB libspdk_blob.a 00:03:26.597 CC lib/blobfs/blobfs.o 00:03:26.597 CC lib/blobfs/tree.o 00:03:26.597 CC lib/lvol/lvol.o 00:03:26.856 LIB libspdk_lvol.a 00:03:27.115 LIB libspdk_blobfs.a 00:03:27.115 LIB libspdk_bdev.a 00:03:27.374 CC lib/nbd/nbd.o 00:03:27.374 CC lib/nbd/nbd_rpc.o 00:03:27.374 CC lib/nvmf/ctrlr.o 00:03:27.374 CC lib/nvmf/ctrlr_discovery.o 00:03:27.374 CC lib/nvmf/subsystem.o 00:03:27.374 CC lib/nvmf/ctrlr_bdev.o 00:03:27.374 CC lib/nvmf/nvmf.o 00:03:27.374 CC lib/nvmf/nvmf_rpc.o 00:03:27.374 CC lib/nvmf/transport.o 00:03:27.374 CC lib/nvmf/tcp.o 00:03:27.374 CC lib/scsi/dev.o 00:03:27.374 CC lib/nvmf/stubs.o 00:03:27.374 CC lib/ftl/ftl_core.o 00:03:27.374 CC lib/scsi/lun.o 00:03:27.374 CC lib/ftl/ftl_init.o 00:03:27.374 CC lib/nvmf/mdns_server.o 00:03:27.374 CC lib/scsi/port.o 00:03:27.374 CC lib/nvmf/vfio_user.o 00:03:27.374 CC lib/ftl/ftl_layout.o 00:03:27.374 CC lib/nvmf/rdma.o 00:03:27.374 CC lib/scsi/scsi.o 00:03:27.374 CC lib/scsi/scsi_pr.o 00:03:27.374 CC lib/scsi/scsi_bdev.o 00:03:27.374 CC lib/ftl/ftl_debug.o 00:03:27.374 CC lib/nvmf/auth.o 00:03:27.374 CC lib/ftl/ftl_io.o 00:03:27.374 CC lib/scsi/scsi_rpc.o 00:03:27.374 CC lib/scsi/task.o 00:03:27.374 CC lib/ftl/ftl_sb.o 00:03:27.374 CC lib/ublk/ublk.o 00:03:27.374 CC lib/ftl/ftl_l2p.o 00:03:27.374 CC lib/ublk/ublk_rpc.o 00:03:27.374 CC lib/ftl/ftl_l2p_flat.o 00:03:27.374 CC lib/ftl/ftl_nv_cache.o 00:03:27.374 CC lib/ftl/ftl_band.o 00:03:27.374 CC lib/ftl/ftl_band_ops.o 00:03:27.374 CC lib/ftl/ftl_writer.o 00:03:27.374 CC lib/ftl/ftl_rq.o 00:03:27.374 CC lib/ftl/ftl_reloc.o 00:03:27.374 CC lib/ftl/ftl_l2p_cache.o 00:03:27.374 CC lib/ftl/ftl_p2l.o 00:03:27.374 CC lib/ftl/mngt/ftl_mngt.o 00:03:27.374 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:27.374 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:27.374 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:27.374 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:27.374 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:03:27.374 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:27.374 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:27.374 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:27.374 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:27.374 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:27.374 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:27.374 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:27.374 CC lib/ftl/utils/ftl_md.o 00:03:27.374 CC lib/ftl/utils/ftl_conf.o 00:03:27.374 CC lib/ftl/utils/ftl_mempool.o 00:03:27.374 CC lib/ftl/utils/ftl_property.o 00:03:27.374 CC lib/ftl/utils/ftl_bitmap.o 00:03:27.374 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:27.374 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:27.374 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:27.374 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:27.374 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:27.374 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:27.374 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:27.374 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:27.374 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:27.374 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:27.374 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:27.374 CC lib/ftl/base/ftl_base_dev.o 00:03:27.374 CC lib/ftl/base/ftl_base_bdev.o 00:03:27.374 CC lib/ftl/ftl_trace.o 00:03:27.941 LIB libspdk_nbd.a 00:03:27.941 LIB libspdk_ublk.a 00:03:27.941 LIB libspdk_scsi.a 00:03:28.201 LIB libspdk_ftl.a 00:03:28.201 CC lib/iscsi/conn.o 00:03:28.201 CC lib/iscsi/init_grp.o 00:03:28.201 CC lib/iscsi/iscsi.o 00:03:28.201 CC lib/iscsi/md5.o 00:03:28.201 CC lib/iscsi/param.o 00:03:28.201 CC lib/iscsi/portal_grp.o 00:03:28.201 CC lib/iscsi/tgt_node.o 00:03:28.201 CC lib/iscsi/iscsi_subsystem.o 00:03:28.201 CC lib/iscsi/iscsi_rpc.o 00:03:28.201 CC lib/iscsi/task.o 00:03:28.201 CC lib/vhost/vhost.o 00:03:28.201 CC lib/vhost/vhost_rpc.o 00:03:28.201 CC lib/vhost/vhost_scsi.o 00:03:28.201 CC lib/vhost/vhost_blk.o 00:03:28.201 CC lib/vhost/rte_vhost_user.o 00:03:28.769 LIB libspdk_nvmf.a 00:03:28.769 LIB libspdk_vhost.a 00:03:29.028 LIB libspdk_iscsi.a 00:03:29.287 CC module/vfu_device/vfu_virtio.o 00:03:29.287 CC module/vfu_device/vfu_virtio_blk.o 00:03:29.287 CC module/vfu_device/vfu_virtio_scsi.o 00:03:29.287 CC module/vfu_device/vfu_virtio_rpc.o 00:03:29.287 CC module/env_dpdk/env_dpdk_rpc.o 00:03:29.547 CC module/accel/iaa/accel_iaa.o 00:03:29.547 CC module/accel/iaa/accel_iaa_rpc.o 00:03:29.547 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:29.547 CC module/blob/bdev/blob_bdev.o 00:03:29.547 CC module/keyring/file/keyring_rpc.o 00:03:29.547 CC module/keyring/file/keyring.o 00:03:29.547 CC module/accel/error/accel_error.o 00:03:29.547 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:29.547 CC module/accel/error/accel_error_rpc.o 00:03:29.547 CC module/accel/dsa/accel_dsa_rpc.o 00:03:29.547 CC module/accel/ioat/accel_ioat.o 00:03:29.547 CC module/scheduler/gscheduler/gscheduler.o 00:03:29.547 CC module/accel/dsa/accel_dsa.o 00:03:29.547 CC module/accel/ioat/accel_ioat_rpc.o 00:03:29.547 CC module/keyring/linux/keyring.o 00:03:29.547 CC module/keyring/linux/keyring_rpc.o 00:03:29.547 LIB libspdk_env_dpdk_rpc.a 00:03:29.547 CC module/sock/posix/posix.o 00:03:29.547 LIB libspdk_scheduler_dpdk_governor.a 00:03:29.547 LIB libspdk_keyring_linux.a 00:03:29.547 LIB libspdk_keyring_file.a 00:03:29.547 LIB libspdk_scheduler_gscheduler.a 00:03:29.547 LIB libspdk_accel_error.a 00:03:29.547 LIB libspdk_scheduler_dynamic.a 00:03:29.547 LIB libspdk_accel_iaa.a 00:03:29.547 LIB libspdk_accel_ioat.a 00:03:29.547 LIB libspdk_blob_bdev.a 00:03:29.806 LIB 
libspdk_accel_dsa.a 00:03:29.806 LIB libspdk_vfu_device.a 00:03:30.065 LIB libspdk_sock_posix.a 00:03:30.065 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:30.065 CC module/blobfs/bdev/blobfs_bdev.o 00:03:30.065 CC module/bdev/gpt/gpt.o 00:03:30.065 CC module/bdev/gpt/vbdev_gpt.o 00:03:30.065 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:30.065 CC module/bdev/error/vbdev_error_rpc.o 00:03:30.065 CC module/bdev/nvme/nvme_rpc.o 00:03:30.065 CC module/bdev/error/vbdev_error.o 00:03:30.065 CC module/bdev/nvme/bdev_nvme.o 00:03:30.065 CC module/bdev/nvme/bdev_mdns_client.o 00:03:30.065 CC module/bdev/nvme/vbdev_opal.o 00:03:30.065 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:30.065 CC module/bdev/malloc/bdev_malloc.o 00:03:30.065 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:30.065 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:30.065 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:30.065 CC module/bdev/null/bdev_null.o 00:03:30.065 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:30.065 CC module/bdev/raid/bdev_raid.o 00:03:30.065 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:30.065 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:30.065 CC module/bdev/raid/bdev_raid_rpc.o 00:03:30.065 CC module/bdev/delay/vbdev_delay.o 00:03:30.065 CC module/bdev/null/bdev_null_rpc.o 00:03:30.065 CC module/bdev/raid/raid0.o 00:03:30.065 CC module/bdev/raid/bdev_raid_sb.o 00:03:30.065 CC module/bdev/split/vbdev_split.o 00:03:30.065 CC module/bdev/lvol/vbdev_lvol.o 00:03:30.065 CC module/bdev/split/vbdev_split_rpc.o 00:03:30.065 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:30.065 CC module/bdev/raid/raid1.o 00:03:30.065 CC module/bdev/ftl/bdev_ftl.o 00:03:30.065 CC module/bdev/raid/concat.o 00:03:30.065 CC module/bdev/iscsi/bdev_iscsi.o 00:03:30.065 CC module/bdev/aio/bdev_aio.o 00:03:30.065 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:30.065 CC module/bdev/aio/bdev_aio_rpc.o 00:03:30.065 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:30.065 CC module/bdev/passthru/vbdev_passthru.o 00:03:30.065 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:30.065 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:30.065 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:30.065 LIB libspdk_blobfs_bdev.a 00:03:30.324 LIB libspdk_bdev_split.a 00:03:30.324 LIB libspdk_bdev_gpt.a 00:03:30.324 LIB libspdk_bdev_error.a 00:03:30.324 LIB libspdk_bdev_null.a 00:03:30.324 LIB libspdk_bdev_ftl.a 00:03:30.324 LIB libspdk_bdev_passthru.a 00:03:30.324 LIB libspdk_bdev_aio.a 00:03:30.324 LIB libspdk_bdev_zone_block.a 00:03:30.324 LIB libspdk_bdev_iscsi.a 00:03:30.324 LIB libspdk_bdev_delay.a 00:03:30.324 LIB libspdk_bdev_malloc.a 00:03:30.324 LIB libspdk_bdev_lvol.a 00:03:30.324 LIB libspdk_bdev_virtio.a 00:03:30.583 LIB libspdk_bdev_raid.a 00:03:31.521 LIB libspdk_bdev_nvme.a 00:03:31.779 CC module/event/subsystems/scheduler/scheduler.o 00:03:31.779 CC module/event/subsystems/keyring/keyring.o 00:03:31.779 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:31.779 CC module/event/subsystems/vmd/vmd.o 00:03:31.779 CC module/event/subsystems/sock/sock.o 00:03:31.779 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:31.779 CC module/event/subsystems/iobuf/iobuf.o 00:03:31.779 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:31.779 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:32.038 LIB libspdk_event_keyring.a 00:03:32.038 LIB libspdk_event_scheduler.a 00:03:32.038 LIB libspdk_event_vmd.a 00:03:32.038 LIB libspdk_event_vhost_blk.a 00:03:32.038 LIB libspdk_event_sock.a 00:03:32.038 LIB libspdk_event_vfu_tgt.a 00:03:32.038 LIB libspdk_event_iobuf.a 
00:03:32.297 CC module/event/subsystems/accel/accel.o 00:03:32.297 LIB libspdk_event_accel.a 00:03:32.557 CC module/event/subsystems/bdev/bdev.o 00:03:32.817 LIB libspdk_event_bdev.a 00:03:33.076 CC module/event/subsystems/ublk/ublk.o 00:03:33.076 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:33.076 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:33.076 CC module/event/subsystems/scsi/scsi.o 00:03:33.076 CC module/event/subsystems/nbd/nbd.o 00:03:33.076 LIB libspdk_event_ublk.a 00:03:33.076 LIB libspdk_event_nbd.a 00:03:33.076 LIB libspdk_event_scsi.a 00:03:33.076 LIB libspdk_event_nvmf.a 00:03:33.337 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:33.337 CC module/event/subsystems/iscsi/iscsi.o 00:03:33.597 LIB libspdk_event_vhost_scsi.a 00:03:33.597 LIB libspdk_event_iscsi.a 00:03:33.860 TEST_HEADER include/spdk/accel.h 00:03:33.860 TEST_HEADER include/spdk/accel_module.h 00:03:33.860 TEST_HEADER include/spdk/assert.h 00:03:33.860 TEST_HEADER include/spdk/barrier.h 00:03:33.860 TEST_HEADER include/spdk/base64.h 00:03:33.860 CC app/spdk_lspci/spdk_lspci.o 00:03:33.860 TEST_HEADER include/spdk/bdev.h 00:03:33.860 TEST_HEADER include/spdk/bdev_zone.h 00:03:33.860 TEST_HEADER include/spdk/bdev_module.h 00:03:33.860 TEST_HEADER include/spdk/bit_array.h 00:03:33.860 TEST_HEADER include/spdk/bit_pool.h 00:03:33.860 CC test/rpc_client/rpc_client_test.o 00:03:33.860 TEST_HEADER include/spdk/blob_bdev.h 00:03:33.860 CC app/spdk_top/spdk_top.o 00:03:33.860 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:33.860 CC app/spdk_nvme_identify/identify.o 00:03:33.860 TEST_HEADER include/spdk/blobfs.h 00:03:33.860 TEST_HEADER include/spdk/blob.h 00:03:33.860 TEST_HEADER include/spdk/config.h 00:03:33.860 TEST_HEADER include/spdk/conf.h 00:03:33.860 TEST_HEADER include/spdk/cpuset.h 00:03:33.860 TEST_HEADER include/spdk/crc32.h 00:03:33.860 TEST_HEADER include/spdk/crc16.h 00:03:33.860 TEST_HEADER include/spdk/crc64.h 00:03:33.860 CC app/trace_record/trace_record.o 00:03:33.860 TEST_HEADER include/spdk/dma.h 00:03:33.860 TEST_HEADER include/spdk/dif.h 00:03:33.860 TEST_HEADER include/spdk/endian.h 00:03:33.860 CC app/spdk_nvme_perf/perf.o 00:03:33.860 TEST_HEADER include/spdk/env_dpdk.h 00:03:33.860 CC app/spdk_nvme_discover/discovery_aer.o 00:03:33.860 TEST_HEADER include/spdk/env.h 00:03:33.860 TEST_HEADER include/spdk/event.h 00:03:33.860 TEST_HEADER include/spdk/fd_group.h 00:03:33.860 TEST_HEADER include/spdk/fd.h 00:03:33.860 TEST_HEADER include/spdk/file.h 00:03:33.860 CXX app/trace/trace.o 00:03:33.860 TEST_HEADER include/spdk/ftl.h 00:03:33.860 TEST_HEADER include/spdk/gpt_spec.h 00:03:33.860 TEST_HEADER include/spdk/hexlify.h 00:03:33.860 TEST_HEADER include/spdk/histogram_data.h 00:03:33.860 TEST_HEADER include/spdk/idxd.h 00:03:33.860 TEST_HEADER include/spdk/idxd_spec.h 00:03:33.860 TEST_HEADER include/spdk/ioat.h 00:03:33.860 TEST_HEADER include/spdk/ioat_spec.h 00:03:33.860 TEST_HEADER include/spdk/init.h 00:03:33.860 TEST_HEADER include/spdk/iscsi_spec.h 00:03:33.860 TEST_HEADER include/spdk/json.h 00:03:33.860 TEST_HEADER include/spdk/jsonrpc.h 00:03:33.860 TEST_HEADER include/spdk/keyring.h 00:03:33.860 TEST_HEADER include/spdk/keyring_module.h 00:03:33.860 TEST_HEADER include/spdk/likely.h 00:03:33.860 TEST_HEADER include/spdk/lvol.h 00:03:33.860 TEST_HEADER include/spdk/memory.h 00:03:33.860 TEST_HEADER include/spdk/log.h 00:03:33.860 TEST_HEADER include/spdk/mmio.h 00:03:33.860 TEST_HEADER include/spdk/net.h 00:03:33.860 TEST_HEADER include/spdk/notify.h 00:03:33.860 
TEST_HEADER include/spdk/nbd.h 00:03:33.860 TEST_HEADER include/spdk/nvme_intel.h 00:03:33.860 TEST_HEADER include/spdk/nvme.h 00:03:33.860 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:33.860 TEST_HEADER include/spdk/nvme_spec.h 00:03:33.860 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:33.860 TEST_HEADER include/spdk/nvme_zns.h 00:03:33.860 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:33.860 CC app/iscsi_tgt/iscsi_tgt.o 00:03:33.860 TEST_HEADER include/spdk/nvmf.h 00:03:33.860 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:33.860 TEST_HEADER include/spdk/nvmf_spec.h 00:03:33.860 TEST_HEADER include/spdk/nvmf_transport.h 00:03:33.860 TEST_HEADER include/spdk/opal.h 00:03:33.860 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:33.860 TEST_HEADER include/spdk/pipe.h 00:03:33.860 CC app/spdk_dd/spdk_dd.o 00:03:33.860 TEST_HEADER include/spdk/opal_spec.h 00:03:33.860 TEST_HEADER include/spdk/reduce.h 00:03:33.860 TEST_HEADER include/spdk/queue.h 00:03:33.860 TEST_HEADER include/spdk/rpc.h 00:03:33.860 CC app/nvmf_tgt/nvmf_main.o 00:03:33.860 TEST_HEADER include/spdk/pci_ids.h 00:03:33.860 TEST_HEADER include/spdk/scheduler.h 00:03:33.860 TEST_HEADER include/spdk/scsi.h 00:03:33.860 TEST_HEADER include/spdk/sock.h 00:03:33.860 TEST_HEADER include/spdk/scsi_spec.h 00:03:33.860 TEST_HEADER include/spdk/stdinc.h 00:03:33.860 TEST_HEADER include/spdk/string.h 00:03:33.860 TEST_HEADER include/spdk/trace.h 00:03:33.860 TEST_HEADER include/spdk/thread.h 00:03:33.860 TEST_HEADER include/spdk/trace_parser.h 00:03:33.860 TEST_HEADER include/spdk/tree.h 00:03:33.860 TEST_HEADER include/spdk/ublk.h 00:03:33.860 TEST_HEADER include/spdk/uuid.h 00:03:33.860 TEST_HEADER include/spdk/util.h 00:03:33.860 TEST_HEADER include/spdk/version.h 00:03:33.860 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:33.860 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:33.860 TEST_HEADER include/spdk/vhost.h 00:03:33.860 TEST_HEADER include/spdk/vmd.h 00:03:33.860 TEST_HEADER include/spdk/zipf.h 00:03:33.860 TEST_HEADER include/spdk/xor.h 00:03:33.860 CXX test/cpp_headers/accel_module.o 00:03:33.860 CXX test/cpp_headers/assert.o 00:03:33.860 CXX test/cpp_headers/accel.o 00:03:33.860 CXX test/cpp_headers/barrier.o 00:03:33.860 CXX test/cpp_headers/base64.o 00:03:33.860 CXX test/cpp_headers/bdev_module.o 00:03:33.860 CXX test/cpp_headers/bdev.o 00:03:33.860 CXX test/cpp_headers/bdev_zone.o 00:03:33.860 CXX test/cpp_headers/bit_array.o 00:03:33.860 CXX test/cpp_headers/bit_pool.o 00:03:33.860 CXX test/cpp_headers/blob_bdev.o 00:03:33.860 CXX test/cpp_headers/blobfs_bdev.o 00:03:33.860 CXX test/cpp_headers/blob.o 00:03:33.860 CXX test/cpp_headers/conf.o 00:03:33.860 CXX test/cpp_headers/blobfs.o 00:03:33.860 CXX test/cpp_headers/config.o 00:03:33.860 CXX test/cpp_headers/crc16.o 00:03:33.860 CXX test/cpp_headers/crc32.o 00:03:33.860 CXX test/cpp_headers/cpuset.o 00:03:33.860 CXX test/cpp_headers/crc64.o 00:03:33.860 CXX test/cpp_headers/dif.o 00:03:33.860 CXX test/cpp_headers/dma.o 00:03:33.860 CXX test/cpp_headers/endian.o 00:03:33.860 CXX test/cpp_headers/env_dpdk.o 00:03:33.860 CXX test/cpp_headers/env.o 00:03:33.860 CXX test/cpp_headers/fd_group.o 00:03:33.860 CXX test/cpp_headers/fd.o 00:03:33.860 CXX test/cpp_headers/event.o 00:03:33.860 CXX test/cpp_headers/ftl.o 00:03:33.860 CXX test/cpp_headers/file.o 00:03:33.860 CXX test/cpp_headers/gpt_spec.o 00:03:33.860 CXX test/cpp_headers/hexlify.o 00:03:33.860 CXX test/cpp_headers/histogram_data.o 00:03:33.860 CXX test/cpp_headers/idxd.o 00:03:33.860 CXX test/cpp_headers/idxd_spec.o 
00:03:33.860 CXX test/cpp_headers/init.o 00:03:33.860 CXX test/cpp_headers/ioat.o 00:03:33.860 CXX test/cpp_headers/ioat_spec.o 00:03:33.860 CXX test/cpp_headers/iscsi_spec.o 00:03:33.860 CXX test/cpp_headers/json.o 00:03:33.860 CXX test/cpp_headers/jsonrpc.o 00:03:33.860 CXX test/cpp_headers/keyring.o 00:03:33.860 CXX test/cpp_headers/keyring_module.o 00:03:33.860 CXX test/cpp_headers/log.o 00:03:33.860 CXX test/cpp_headers/likely.o 00:03:33.860 CXX test/cpp_headers/lvol.o 00:03:33.860 CXX test/cpp_headers/memory.o 00:03:33.860 CXX test/cpp_headers/mmio.o 00:03:33.860 CXX test/cpp_headers/nbd.o 00:03:33.860 CXX test/cpp_headers/notify.o 00:03:33.860 CXX test/cpp_headers/net.o 00:03:33.860 CXX test/cpp_headers/nvme_intel.o 00:03:33.860 CXX test/cpp_headers/nvme.o 00:03:33.860 CXX test/cpp_headers/nvme_ocssd.o 00:03:33.860 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:33.860 CXX test/cpp_headers/nvme_spec.o 00:03:33.861 CXX test/cpp_headers/nvme_zns.o 00:03:33.861 CC app/spdk_tgt/spdk_tgt.o 00:03:33.861 CC examples/ioat/verify/verify.o 00:03:33.861 CC test/thread/poller_perf/poller_perf.o 00:03:33.861 CC test/env/vtophys/vtophys.o 00:03:33.861 CC examples/ioat/perf/perf.o 00:03:33.861 CC examples/util/zipf/zipf.o 00:03:33.861 CC test/app/jsoncat/jsoncat.o 00:03:33.861 CC test/app/histogram_perf/histogram_perf.o 00:03:33.861 CC test/thread/lock/spdk_lock.o 00:03:33.861 CC test/env/memory/memory_ut.o 00:03:33.861 CC test/env/pci/pci_ut.o 00:03:33.861 CC test/app/stub/stub.o 00:03:33.861 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:34.120 CC app/fio/nvme/fio_plugin.o 00:03:34.120 CXX test/cpp_headers/nvmf_cmd.o 00:03:34.120 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:34.120 LINK spdk_lspci 00:03:34.120 CC test/app/bdev_svc/bdev_svc.o 00:03:34.120 CC test/dma/test_dma/test_dma.o 00:03:34.120 CC app/fio/bdev/fio_plugin.o 00:03:34.120 LINK rpc_client_test 00:03:34.120 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:34.120 CC test/env/mem_callbacks/mem_callbacks.o 00:03:34.120 LINK spdk_nvme_discover 00:03:34.120 CXX test/cpp_headers/nvmf.o 00:03:34.120 CXX test/cpp_headers/nvmf_spec.o 00:03:34.120 CXX test/cpp_headers/nvmf_transport.o 00:03:34.120 LINK jsoncat 00:03:34.120 CXX test/cpp_headers/opal.o 00:03:34.120 CXX test/cpp_headers/opal_spec.o 00:03:34.120 CXX test/cpp_headers/pci_ids.o 00:03:34.120 LINK vtophys 00:03:34.120 CXX test/cpp_headers/pipe.o 00:03:34.120 LINK interrupt_tgt 00:03:34.120 CXX test/cpp_headers/queue.o 00:03:34.120 CXX test/cpp_headers/reduce.o 00:03:34.120 LINK poller_perf 00:03:34.120 LINK spdk_trace_record 00:03:34.120 CXX test/cpp_headers/rpc.o 00:03:34.120 CXX test/cpp_headers/scheduler.o 00:03:34.120 CXX test/cpp_headers/scsi.o 00:03:34.120 LINK nvmf_tgt 00:03:34.120 CXX test/cpp_headers/scsi_spec.o 00:03:34.120 CXX test/cpp_headers/sock.o 00:03:34.120 CXX test/cpp_headers/stdinc.o 00:03:34.120 CXX test/cpp_headers/string.o 00:03:34.120 CXX test/cpp_headers/thread.o 00:03:34.120 CXX test/cpp_headers/trace.o 00:03:34.120 LINK histogram_perf 00:03:34.120 CXX test/cpp_headers/trace_parser.o 00:03:34.120 CXX test/cpp_headers/tree.o 00:03:34.120 CXX test/cpp_headers/ublk.o 00:03:34.120 CXX test/cpp_headers/util.o 00:03:34.120 CXX test/cpp_headers/uuid.o 00:03:34.120 CXX test/cpp_headers/version.o 00:03:34.120 CXX test/cpp_headers/vfio_user_pci.o 00:03:34.120 CXX test/cpp_headers/vfio_user_spec.o 00:03:34.120 CXX test/cpp_headers/vhost.o 00:03:34.120 CXX test/cpp_headers/vmd.o 00:03:34.120 CXX test/cpp_headers/xor.o 00:03:34.120 CXX test/cpp_headers/zipf.o 
00:03:34.120 LINK zipf 00:03:34.120 LINK iscsi_tgt 00:03:34.120 LINK env_dpdk_post_init 00:03:34.120 LINK stub 00:03:34.120 LINK verify 00:03:34.379 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:34.379 LINK spdk_tgt 00:03:34.379 LINK ioat_perf 00:03:34.379 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:34.379 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:34.379 LINK bdev_svc 00:03:34.379 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:03:34.379 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:03:34.379 LINK spdk_trace 00:03:34.379 LINK test_dma 00:03:34.379 LINK spdk_dd 00:03:34.379 LINK pci_ut 00:03:34.638 LINK nvme_fuzz 00:03:34.638 LINK mem_callbacks 00:03:34.638 LINK llvm_vfio_fuzz 00:03:34.638 LINK spdk_nvme_identify 00:03:34.638 LINK vhost_fuzz 00:03:34.638 LINK spdk_bdev 00:03:34.638 LINK spdk_nvme_perf 00:03:34.896 LINK spdk_nvme 00:03:34.896 CC app/vhost/vhost.o 00:03:34.896 CC examples/idxd/perf/perf.o 00:03:34.896 CC examples/sock/hello_world/hello_sock.o 00:03:34.896 LINK spdk_top 00:03:34.896 CC examples/vmd/lsvmd/lsvmd.o 00:03:34.896 CC examples/vmd/led/led.o 00:03:34.896 CC examples/thread/thread/thread_ex.o 00:03:34.896 LINK llvm_nvme_fuzz 00:03:34.896 LINK lsvmd 00:03:34.896 LINK led 00:03:34.896 LINK vhost 00:03:34.896 LINK memory_ut 00:03:34.896 LINK hello_sock 00:03:35.155 LINK idxd_perf 00:03:35.155 LINK thread 00:03:35.155 LINK spdk_lock 00:03:35.414 LINK iscsi_fuzz 00:03:35.672 CC examples/nvme/reconnect/reconnect.o 00:03:35.672 CC examples/nvme/arbitration/arbitration.o 00:03:35.672 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:35.672 CC examples/nvme/abort/abort.o 00:03:35.672 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:35.672 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:35.672 CC examples/nvme/hotplug/hotplug.o 00:03:35.672 CC examples/nvme/hello_world/hello_world.o 00:03:35.672 CC test/event/reactor/reactor.o 00:03:35.672 CC test/event/reactor_perf/reactor_perf.o 00:03:35.672 CC test/event/app_repeat/app_repeat.o 00:03:35.672 CC test/event/event_perf/event_perf.o 00:03:35.930 LINK pmr_persistence 00:03:35.930 LINK cmb_copy 00:03:35.930 CC test/event/scheduler/scheduler.o 00:03:35.930 LINK hello_world 00:03:35.930 LINK hotplug 00:03:35.930 LINK reactor 00:03:35.930 LINK reactor_perf 00:03:35.930 LINK event_perf 00:03:35.930 LINK reconnect 00:03:35.930 LINK abort 00:03:35.930 LINK app_repeat 00:03:35.930 LINK arbitration 00:03:35.930 LINK nvme_manage 00:03:35.930 LINK scheduler 00:03:36.189 CC test/nvme/aer/aer.o 00:03:36.189 CC test/nvme/sgl/sgl.o 00:03:36.189 CC test/nvme/fdp/fdp.o 00:03:36.189 CC test/nvme/overhead/overhead.o 00:03:36.189 CC test/nvme/simple_copy/simple_copy.o 00:03:36.189 CC test/nvme/e2edp/nvme_dp.o 00:03:36.189 CC test/nvme/err_injection/err_injection.o 00:03:36.189 CC test/nvme/startup/startup.o 00:03:36.189 CC test/nvme/connect_stress/connect_stress.o 00:03:36.189 CC test/nvme/reserve/reserve.o 00:03:36.189 CC test/nvme/fused_ordering/fused_ordering.o 00:03:36.189 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:36.189 CC test/nvme/cuse/cuse.o 00:03:36.189 CC test/nvme/reset/reset.o 00:03:36.189 CC test/nvme/boot_partition/boot_partition.o 00:03:36.189 CC test/nvme/compliance/nvme_compliance.o 00:03:36.189 CC test/accel/dif/dif.o 00:03:36.189 CC test/blobfs/mkfs/mkfs.o 00:03:36.189 CC test/lvol/esnap/esnap.o 00:03:36.189 LINK startup 00:03:36.189 LINK connect_stress 00:03:36.448 LINK err_injection 00:03:36.448 LINK boot_partition 00:03:36.448 LINK fused_ordering 00:03:36.448 LINK reserve 00:03:36.448 LINK 
doorbell_aers 00:03:36.448 LINK simple_copy 00:03:36.448 LINK aer 00:03:36.448 LINK sgl 00:03:36.448 LINK nvme_dp 00:03:36.448 LINK overhead 00:03:36.448 LINK fdp 00:03:36.448 LINK reset 00:03:36.448 LINK mkfs 00:03:36.448 LINK nvme_compliance 00:03:36.448 LINK dif 00:03:36.707 CC examples/accel/perf/accel_perf.o 00:03:36.707 CC examples/blob/hello_world/hello_blob.o 00:03:36.707 CC examples/blob/cli/blobcli.o 00:03:36.965 LINK hello_blob 00:03:36.965 LINK accel_perf 00:03:36.965 LINK blobcli 00:03:37.224 LINK cuse 00:03:37.791 CC examples/bdev/hello_world/hello_bdev.o 00:03:37.791 CC examples/bdev/bdevperf/bdevperf.o 00:03:37.791 LINK hello_bdev 00:03:38.050 CC test/bdev/bdevio/bdevio.o 00:03:38.050 LINK bdevperf 00:03:38.309 LINK bdevio 00:03:39.688 CC examples/nvmf/nvmf/nvmf.o 00:03:39.688 LINK esnap 00:03:39.688 LINK nvmf 00:03:41.068 00:03:41.068 real 0m41.894s 00:03:41.068 user 6m3.940s 00:03:41.068 sys 2m3.843s 00:03:41.068 15:51:58 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:41.068 15:51:58 make -- common/autotest_common.sh@10 -- $ set +x 00:03:41.068 ************************************ 00:03:41.068 END TEST make 00:03:41.068 ************************************ 00:03:41.068 15:51:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:41.068 15:51:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:41.068 15:51:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:41.068 15:51:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.068 15:51:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:41.068 15:51:58 -- pm/common@44 -- $ pid=30202 00:03:41.068 15:51:58 -- pm/common@50 -- $ kill -TERM 30202 00:03:41.068 15:51:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.068 15:51:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:41.068 15:51:58 -- pm/common@44 -- $ pid=30204 00:03:41.068 15:51:58 -- pm/common@50 -- $ kill -TERM 30204 00:03:41.068 15:51:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.068 15:51:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:41.068 15:51:58 -- pm/common@44 -- $ pid=30205 00:03:41.068 15:51:58 -- pm/common@50 -- $ kill -TERM 30205 00:03:41.068 15:51:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.068 15:51:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:41.068 15:51:58 -- pm/common@44 -- $ pid=30229 00:03:41.068 15:51:58 -- pm/common@50 -- $ sudo -E kill -TERM 30229 00:03:41.068 15:51:58 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:41.068 15:51:58 -- nvmf/common.sh@7 -- # uname -s 00:03:41.068 15:51:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:41.068 15:51:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:41.068 15:51:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:41.068 15:51:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:41.068 15:51:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:41.068 15:51:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:41.068 15:51:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:41.068 15:51:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:41.068 
15:51:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:41.068 15:51:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:41.068 15:51:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0055cbfc-b15d-ea11-906e-0017a4403562 00:03:41.068 15:51:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=0055cbfc-b15d-ea11-906e-0017a4403562 00:03:41.068 15:51:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:41.068 15:51:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:41.068 15:51:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:41.068 15:51:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:41.068 15:51:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:41.068 15:51:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:41.068 15:51:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:41.068 15:51:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:41.068 15:51:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.068 15:51:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.068 15:51:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.068 15:51:58 -- paths/export.sh@5 -- # export PATH 00:03:41.068 15:51:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.068 15:51:58 -- nvmf/common.sh@47 -- # : 0 00:03:41.068 15:51:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:41.068 15:51:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:41.068 15:51:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:41.068 15:51:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:41.068 15:51:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:41.068 15:51:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:41.068 15:51:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:41.068 15:51:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:41.068 15:51:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:41.068 15:51:58 -- spdk/autotest.sh@32 -- # uname -s 00:03:41.068 15:51:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:41.068 15:51:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:41.068 15:51:58 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:41.068 15:51:58 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:41.068 15:51:58 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:41.068 15:51:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:41.068 15:51:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:41.068 15:51:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:41.068 15:51:58 -- spdk/autotest.sh@48 -- # udevadm_pid=90938 00:03:41.068 15:51:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:41.068 15:51:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:41.068 15:51:58 -- pm/common@17 -- # local monitor 00:03:41.069 15:51:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.069 15:51:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.069 15:51:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.069 15:51:58 -- pm/common@21 -- # date +%s 00:03:41.069 15:51:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.069 15:51:58 -- pm/common@21 -- # date +%s 00:03:41.069 15:51:58 -- pm/common@25 -- # sleep 1 00:03:41.069 15:51:58 -- pm/common@21 -- # date +%s 00:03:41.069 15:51:58 -- pm/common@21 -- # date +%s 00:03:41.069 15:51:58 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721915518 00:03:41.069 15:51:58 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721915518 00:03:41.069 15:51:58 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721915518 00:03:41.069 15:51:58 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721915518 00:03:41.069 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721915518_collect-vmstat.pm.log 00:03:41.069 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721915518_collect-cpu-temp.pm.log 00:03:41.069 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721915518_collect-cpu-load.pm.log 00:03:41.069 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721915518_collect-bmc-pm.bmc.pm.log 00:03:42.009 15:51:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:42.009 15:51:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:42.009 15:51:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:42.009 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:03:42.009 15:51:59 -- spdk/autotest.sh@59 -- # create_test_list 00:03:42.009 15:51:59 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:42.009 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:03:42.009 15:51:59 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:42.009 15:51:59 -- spdk/autotest.sh@61 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:42.009 15:51:59 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:42.268 15:51:59 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:42.268 15:51:59 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:42.268 15:51:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:42.268 15:52:00 -- common/autotest_common.sh@1455 -- # uname 00:03:42.268 15:52:00 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:42.268 15:52:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:42.268 15:52:00 -- common/autotest_common.sh@1475 -- # uname 00:03:42.268 15:52:00 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:42.268 15:52:00 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:42.268 15:52:00 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:42.268 15:52:00 -- spdk/autotest.sh@72 -- # hash lcov 00:03:42.268 15:52:00 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:03:42.268 15:52:00 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:42.268 15:52:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:42.268 15:52:00 -- common/autotest_common.sh@10 -- # set +x 00:03:42.268 15:52:00 -- spdk/autotest.sh@91 -- # rm -f 00:03:42.268 15:52:00 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.558 0000:dd:00.0 (8086 0a54): Already using the nvme driver 00:03:45.558 0000:df:00.0 (8086 0a54): Already using the nvme driver 00:03:45.558 0000:de:00.0 (8086 0953): Already using the nvme driver 00:03:45.817 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:45.817 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:45.817 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:45.817 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:45.817 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:45.817 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:45.817 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:45.817 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:45.817 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:45.817 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:45.817 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:45.817 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:46.076 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:46.076 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:46.076 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:46.076 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:46.076 0000:dc:00.0 (8086 0953): Already using the nvme driver 00:03:47.451 15:52:05 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:47.451 15:52:05 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:47.451 15:52:05 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:47.451 15:52:05 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:47.451 15:52:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:47.451 15:52:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:47.451 15:52:05 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:47.451 15:52:05 -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:47.451 15:52:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:47.451 15:52:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:47.451 15:52:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:47.451 15:52:05 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:47.451 15:52:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:47.451 15:52:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:47.451 15:52:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:47.451 15:52:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:03:47.451 15:52:05 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:03:47.451 15:52:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:47.451 15:52:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:47.451 15:52:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:47.451 15:52:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:03:47.451 15:52:05 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:03:47.451 15:52:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:47.451 15:52:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:47.451 15:52:05 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:47.451 15:52:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.451 15:52:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:47.451 15:52:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:47.451 15:52:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:47.451 15:52:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:47.451 No valid GPT data, bailing 00:03:47.451 15:52:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:47.451 15:52:05 -- scripts/common.sh@391 -- # pt= 00:03:47.452 15:52:05 -- scripts/common.sh@392 -- # return 1 00:03:47.452 15:52:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:47.452 1+0 records in 00:03:47.452 1+0 records out 00:03:47.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00321942 s, 326 MB/s 00:03:47.452 15:52:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.452 15:52:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:47.452 15:52:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:47.452 15:52:05 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:47.452 15:52:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:47.452 No valid GPT data, bailing 00:03:47.452 15:52:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:47.452 15:52:05 -- scripts/common.sh@391 -- # pt= 00:03:47.452 15:52:05 -- scripts/common.sh@392 -- # return 1 00:03:47.452 15:52:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:47.452 1+0 records in 00:03:47.452 1+0 records out 00:03:47.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0054922 s, 191 MB/s 00:03:47.452 15:52:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.452 15:52:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:47.452 15:52:05 -- spdk/autotest.sh@113 -- # 
block_in_use /dev/nvme2n1 00:03:47.452 15:52:05 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:03:47.452 15:52:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:47.452 No valid GPT data, bailing 00:03:47.452 15:52:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:47.452 15:52:05 -- scripts/common.sh@391 -- # pt= 00:03:47.452 15:52:05 -- scripts/common.sh@392 -- # return 1 00:03:47.452 15:52:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:47.452 1+0 records in 00:03:47.452 1+0 records out 00:03:47.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00396718 s, 264 MB/s 00:03:47.452 15:52:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.452 15:52:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:47.452 15:52:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:03:47.452 15:52:05 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:03:47.452 15:52:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:47.711 No valid GPT data, bailing 00:03:47.711 15:52:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:47.711 15:52:05 -- scripts/common.sh@391 -- # pt= 00:03:47.711 15:52:05 -- scripts/common.sh@392 -- # return 1 00:03:47.711 15:52:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:47.711 1+0 records in 00:03:47.711 1+0 records out 00:03:47.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447701 s, 234 MB/s 00:03:47.711 15:52:05 -- spdk/autotest.sh@118 -- # sync 00:03:47.711 15:52:05 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:47.711 15:52:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:47.711 15:52:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:52.986 15:52:10 -- spdk/autotest.sh@124 -- # uname -s 00:03:52.986 15:52:10 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:52.986 15:52:10 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:52.986 15:52:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.986 15:52:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.986 15:52:10 -- common/autotest_common.sh@10 -- # set +x 00:03:52.986 ************************************ 00:03:52.986 START TEST setup.sh 00:03:52.986 ************************************ 00:03:52.986 15:52:10 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:52.986 * Looking for test storage... 
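The pre-cleanup pass above (autotest.sh@96-118) loops over every NVMe namespace, skips any whose /sys/block/<dev>/queue/zoned entry reports something other than "none", and wipes the first MiB of each namespace that block_in_use reports as free, so the setup tests start from blank devices. A rough standalone reconstruction of that flow, based only on the commands visible in the trace (not copied from autotest.sh or scripts/common.sh):

for nvme in /sys/block/nvme*n*; do            # /sys/block lists whole namespaces, not partitions
    dev=${nvme##*/}
    # is_block_zoned(): zoned namespaces are skipped (all four report "none" in this run)
    if [[ -e "$nvme/queue/zoned" && $(cat "$nvme/queue/zoned") != none ]]; then
        continue
    fi
    # block_in_use(): scripts/spdk-gpt.py probes for an SPDK GPT partition ("No valid GPT
    # data, bailing" above) and blkid then confirms there is no partition table at all
    if [[ -z "$(blkid -s PTTYPE -o value "/dev/$dev")" ]]; then
        # namespace is unused: zero the first MiB, matching the dd output in the log
        dd if=/dev/zero of="/dev/$dev" bs=1M count=1
    fi
done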
00:03:52.986 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:52.987 15:52:10 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:52.987 15:52:10 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:52.987 15:52:10 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:52.987 15:52:10 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.987 15:52:10 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.987 15:52:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.987 ************************************ 00:03:52.987 START TEST acl 00:03:52.987 ************************************ 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:52.987 * Looking for test storage... 00:03:52.987 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:52.987 15:52:10 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:52.987 15:52:10 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:52.987 15:52:10 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:52.987 15:52:10 setup.sh.acl -- setup/acl.sh@12 -- # declare -a 
devs 00:03:52.987 15:52:10 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:52.987 15:52:10 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:52.987 15:52:10 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:52.987 15:52:10 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:52.987 15:52:10 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.266 15:52:15 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:58.266 15:52:15 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:58.266 15:52:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.266 15:52:15 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:58.266 15:52:15 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.266 15:52:15 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:01.561 Hugepages 00:04:01.561 node hugesize free / total 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.561 00:04:01.561 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ 
ioatdma == nvme ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.561 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:01.562 15:52:19 
setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:dc:00.0 == *:*:*.* ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\c\:\0\0\.\0* ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:dd:00.0 == *:*:*.* ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\d\:\0\0\.\0* ]] 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:01.562 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.822 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:de:00.0 == *:*:*.* ]] 00:04:01.822 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:01.822 15:52:19 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\e\:\0\0\.\0* ]] 00:04:01.822 15:52:19 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:01.822 15:52:19 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:01.822 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.822 15:52:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:df:00.0 == *:*:*.* ]] 00:04:01.822 15:52:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:01.822 15:52:19 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\f\:\0\0\.\0* ]] 00:04:01.822 15:52:19 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:01.822 15:52:19 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:01.822 15:52:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.822 15:52:19 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:01.822 15:52:19 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:01.822 15:52:19 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.822 15:52:19 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.822 15:52:19 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:01.822 ************************************ 00:04:01.822 START TEST denied 00:04:01.822 ************************************ 00:04:01.822 15:52:19 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:01.822 15:52:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:dc:00.0' 00:04:01.822 15:52:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:01.822 15:52:19 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:dc:00.0' 00:04:01.822 15:52:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.822 15:52:19 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:09.945 0000:dc:00.0 (8086 0953): Skipping denied controller at 0000:dc:00.0 00:04:09.945 15:52:26 setup.sh.acl.denied -- setup/acl.sh@40 -- # 
verify 0000:dc:00.0 00:04:09.945 15:52:26 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:09.945 15:52:26 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:09.945 15:52:26 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:dc:00.0 ]] 00:04:09.945 15:52:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:dc:00.0/driver 00:04:09.945 15:52:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:09.945 15:52:26 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:09.945 15:52:26 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:09.945 15:52:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.945 15:52:26 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:18.072 00:04:18.072 real 0m15.877s 00:04:18.072 user 0m3.771s 00:04:18.072 sys 0m6.765s 00:04:18.072 15:52:35 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.072 15:52:35 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:18.072 ************************************ 00:04:18.072 END TEST denied 00:04:18.072 ************************************ 00:04:18.072 15:52:35 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:18.072 15:52:35 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.072 15:52:35 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.072 15:52:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:18.072 ************************************ 00:04:18.072 START TEST allowed 00:04:18.072 ************************************ 00:04:18.072 15:52:35 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:18.072 15:52:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:dc:00.0 00:04:18.072 15:52:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:18.072 15:52:35 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:dc:00.0 .*: nvme -> .*' 00:04:18.072 15:52:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.072 15:52:35 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:26.193 0000:dc:00.0 (8086 0953): nvme -> vfio-pci 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:dd:00.0 0000:de:00.0 0000:df:00.0 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:dd:00.0 ]] 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:dd:00.0/driver 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:de:00.0 ]] 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:de:00.0/driver 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@32 -- # 
driver=/sys/bus/pci/drivers/nvme 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:df:00.0 ]] 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:df:00.0/driver 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.193 15:52:42 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.766 00:04:32.766 real 0m14.886s 00:04:32.766 user 0m3.710s 00:04:32.766 sys 0m6.581s 00:04:32.766 15:52:50 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.766 15:52:50 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:32.766 ************************************ 00:04:32.766 END TEST allowed 00:04:32.766 ************************************ 00:04:32.766 00:04:32.766 real 0m40.151s 00:04:32.766 user 0m10.951s 00:04:32.766 sys 0m19.445s 00:04:32.766 15:52:50 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.766 15:52:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:32.766 ************************************ 00:04:32.766 END TEST acl 00:04:32.766 ************************************ 00:04:32.766 15:52:50 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:32.766 15:52:50 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.766 15:52:50 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.766 15:52:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:32.766 ************************************ 00:04:32.766 START TEST hugepages 00:04:32.766 ************************************ 00:04:32.766 15:52:50 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:32.766 * Looking for test storage... 
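Both acl sub-tests above follow the same pattern: export a PCI filter, run scripts/setup.sh, then confirm the result by resolving the controller's driver symlink in sysfs (acl.sh@28-33). The same checks can be reproduced by hand against the BDF used in the log; the snippet below is an illustrative reconstruction of that pattern, not the acl.sh source:

bdf=0000:dc:00.0
# "denied": a controller listed in PCI_BLOCKED must be skipped by setup.sh config
PCI_BLOCKED=" $bdf" scripts/setup.sh config | grep "Skipping denied controller at $bdf"
# "allowed": with only this controller allowed, it is rebound from nvme to vfio-pci
PCI_ALLOWED="$bdf" scripts/setup.sh config | grep -E "$bdf .*: nvme -> .*"
# verify(): the driver a device is bound to is just a symlink under its sysfs node
readlink -f "/sys/bus/pci/devices/$bdf/driver"    # .../drivers/nvme or .../drivers/vfio-pci

In the run above the blocked controller was skipped ("Skipping denied controller at 0000:dc:00.0") and the allowed run rebound it to vfio-pci, which is exactly what the two greps assert.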
00:04:32.766 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 42188712 kB' 'MemAvailable: 45601584 kB' 'Buffers: 11052 kB' 'Cached: 10754360 kB' 'SwapCached: 0 kB' 'Active: 8160612 kB' 'Inactive: 3425260 kB' 'Active(anon): 7772756 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 824028 kB' 'Mapped: 145940 kB' 'Shmem: 6952296 kB' 'KReclaimable: 193816 kB' 'Slab: 572848 kB' 'SReclaimable: 193816 kB' 'SUnreclaim: 379032 kB' 'KernelStack: 18800 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36438936 kB' 'Committed_AS: 9582772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207704 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.766 15:52:50 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.766 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.767 15:52:50 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.767 
15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:32.767 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.028 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.028 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.028 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.028 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.028 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.028 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.028 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.028 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 15:52:50 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:33.029 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.030 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:33.030 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.030 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:33.030 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:33.030 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.030 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:33.030 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.030 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:33.030 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:33.030 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:33.030 15:52:50 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:33.030 15:52:50 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.030 15:52:50 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.030 15:52:50 
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:33.030 ************************************ 00:04:33.030 START TEST default_setup 00:04:33.030 ************************************ 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.030 15:52:50 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:36.330 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:36.330 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:36.590 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:36.590 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:36.590 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:36.590 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:36.590 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:36.590 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:36.590 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:36.590 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:36.590 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:36.590 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:36.590 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:36.590 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:36.590 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:36.590 
0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:38.499 0000:df:00.0 (8086 0a54): nvme -> vfio-pci 00:04:38.499 0000:dd:00.0 (8086 0a54): nvme -> vfio-pci 00:04:38.499 0000:de:00.0 (8086 0953): nvme -> vfio-pci 00:04:38.758 0000:dc:00.0 (8086 0953): nvme -> vfio-pci 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44434648 kB' 'MemAvailable: 47845728 kB' 'Buffers: 11052 kB' 'Cached: 10754516 kB' 'SwapCached: 0 kB' 'Active: 8186428 kB' 'Inactive: 3425260 kB' 'Active(anon): 7798572 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 849352 kB' 'Mapped: 146308 kB' 'Shmem: 6952452 kB' 'KReclaimable: 190232 kB' 'Slab: 565188 kB' 'SReclaimable: 190232 kB' 'SUnreclaim: 374956 kB' 'KernelStack: 19040 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9612256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207928 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.671 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.672 15:52:58 
setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44433324 kB' 'MemAvailable: 47844404 kB' 'Buffers: 11052 kB' 'Cached: 10754520 kB' 'SwapCached: 0 kB' 'Active: 8187048 kB' 'Inactive: 3425260 kB' 'Active(anon): 7799192 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 850112 kB' 'Mapped: 146352 kB' 'Shmem: 6952456 kB' 'KReclaimable: 190232 kB' 'Slab: 565036 kB' 'SReclaimable: 190232 kB' 'SUnreclaim: 374804 kB' 'KernelStack: 19232 kB' 'PageTables: 9252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9611008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207848 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 
15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44434452 kB' 'MemAvailable: 47845532 kB' 'Buffers: 11052 kB' 'Cached: 10754536 kB' 'SwapCached: 0 kB' 'Active: 8186908 kB' 'Inactive: 3425260 kB' 'Active(anon): 7799052 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 849964 kB' 'Mapped: 146240 kB' 'Shmem: 6952472 kB' 'KReclaimable: 190232 kB' 'Slab: 564956 kB' 'SReclaimable: 190232 kB' 'SUnreclaim: 374724 kB' 'KernelStack: 19024 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9611028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207880 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.674 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.675 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.675 
15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
[repetitive xtrace condensed: setup/common.sh@32 tests each remaining /proc/meminfo field (SwapFree through HugePages_Total) against HugePages_Rsvd, hits "continue" for every non-matching field, and re-reads the next one with IFS=': ' read -r var val _]
00:04:40.676 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31
-- # read -r var val _ 00:04:40.676 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.676 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.676 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.676 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.676 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.676 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:40.676 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:40.676 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:40.676 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:40.676 nr_hugepages=1024 00:04:40.676 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:40.676 resv_hugepages=0 00:04:40.676 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:40.676 surplus_hugepages=0 00:04:40.676 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:40.677 anon_hugepages=0 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44434432 kB' 'MemAvailable: 47845512 kB' 'Buffers: 11052 kB' 'Cached: 10754560 kB' 'SwapCached: 0 kB' 'Active: 8188276 kB' 'Inactive: 3425260 kB' 'Active(anon): 7800420 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 850780 kB' 'Mapped: 146744 kB' 'Shmem: 6952496 kB' 'KReclaimable: 190232 kB' 'Slab: 564956 kB' 'SReclaimable: 190232 kB' 
'SUnreclaim: 374724 kB' 'KernelStack: 19008 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9613464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207928 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.677 15:52:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _
[repetitive xtrace condensed: setup/common.sh@32 tests each /proc/meminfo field (Inactive through AnonHugePages) against HugePages_Total and hits "continue" for every non-matching field]
00:04:40.678 15:52:58
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:40.678 15:52:58 
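The pass above is setup/common.sh's get_meminfo resolving HugePages_Rsvd (0) and then HugePages_Total (1024): the meminfo file is slurped with mapfile, any 'Node N ' prefix is stripped, each entry is split on IFS=': ', every key other than the requested one hits continue, and the matching value is echoed back to hugepages.sh. A minimal stand-alone sketch of that parsing pattern follows; the name get_meminfo_sketch is hypothetical and this is a simplification, not the actual setup/common.sh code:

get_meminfo_sketch() {
    # Split each /proc/meminfo line on ': ', skip keys we were not asked for,
    # and print the numeric value of the requested key (the 'kB' unit, when
    # present, lands in the throw-away third field).
    local get=$1 var val unit
    while IFS=': ' read -r var val unit; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1    # key not present
}
resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)    # 1024 in this run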
setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.678 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 19627600 kB' 'MemUsed: 13008052 kB' 'SwapCached: 0 kB' 'Active: 6082964 kB' 'Inactive: 3273012 kB' 'Active(anon): 5897508 kB' 'Inactive(anon): 0 kB' 'Active(file): 185456 kB' 'Inactive(file): 3273012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8659900 kB' 'Mapped: 103428 kB' 'AnonPages: 699364 kB' 'Shmem: 5201432 kB' 'KernelStack: 11448 kB' 'PageTables: 5724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107552 kB' 'Slab: 352936 kB' 'SReclaimable: 107552 kB' 'SUnreclaim: 245384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.679 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.679 15:52:58 
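get_nodes above walks /sys/devices/system/node/node+([0-9]) to find two NUMA nodes, and the HugePages_Surp lookup that follows is scoped to node 0 by switching mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo. A small sketch of that source selection and of reading per-node hugepage counts from sysfs (illustrative only; the exact sysfs files hugepages.sh reads for nodes_sys are not shown in this excerpt):

# Pick the meminfo source the way the traced flow does: the per-node file when
# a node id is supplied and present, /proc/meminfo otherwise.
node_meminfo_file() {
    local node=$1 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
}

# Per-node 2048 kB hugepage counts from the standard sysfs interface,
# mirroring the nodes_sys bookkeeping (node0=1024, node1=0 in this run).
for d in /sys/devices/system/node/node[0-9]*; do
    [[ -d $d ]] || continue
    printf 'node%s: %s\n' "${d##*node}" \
        "$(cat "$d/hugepages/hugepages-2048kB/nr_hugepages")"
done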
setup.sh.hugepages.default_setup -- setup/common.sh@32
[repetitive xtrace condensed: setup/common.sh@32 tests each field of /sys/devices/system/node/node0/meminfo (MemFree through HugePages_Total) against HugePages_Surp and hits "continue" for every non-matching field]
15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.680
15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.680 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.680 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.680 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.680 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:40.680 15:52:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:40.680 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.680 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.680 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.680 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:40.680 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:40.680 node0=1024 expecting 1024 00:04:40.680 15:52:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:40.680 00:04:40.680 real 0m7.552s 00:04:40.680 user 0m2.005s 00:04:40.680 sys 0m3.398s 00:04:40.680 15:52:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.680 15:52:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:40.680 ************************************ 00:04:40.680 END TEST default_setup 00:04:40.680 ************************************ 00:04:40.680 15:52:58 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:40.680 15:52:58 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.680 15:52:58 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.680 15:52:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:40.680 ************************************ 00:04:40.680 START TEST per_node_1G_alloc 00:04:40.680 ************************************ 00:04:40.680 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:40.680 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:40.680 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:40.680 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:40.680 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:40.680 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:40.680 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:40.680 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:40.680 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:40.680 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:40.680 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:40.680 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.681 15:52:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:43.974 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:43.974 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:43.974 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:de:00.0 (8086 0953): Already using the vfio-pci driver 00:04:43.974 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:43.974 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver 00:04:45.358 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:45.358 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:45.359 15:53:03 
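get_test_nr_hugepages above converts the requested 1048576 kB per node into 512 default-size (2048 kB) pages for each of nodes 0 and 1, and scripts/setup.sh is then run with NRHUGE=512 and HUGENODE=0,1, leaving 1024 pages system-wide for verify_nr_hugepages to check. The arithmetic, as a sketch (the literal setup.sh invocation in the last comment is illustrative, not copied from the trace):

size_kb=1048576                                                      # requested per-node allocation
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this machine
nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 1048576 / 2048 = 512 per node
echo "NRHUGE=$nr_hugepages HUGENODE=0,1"
# sudo NRHUGE=512 HUGENODE=0,1 ./spdk/scripts/setup.sh               # would reserve 512 pages on node 0 and 512 on node 1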
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44436704 kB' 'MemAvailable: 47847924 kB' 'Buffers: 11052 kB' 'Cached: 10754680 kB' 'SwapCached: 0 kB' 'Active: 8187948 kB' 'Inactive: 3425260 kB' 'Active(anon): 7800092 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 850804 kB' 'Mapped: 145184 kB' 'Shmem: 6952616 kB' 'KReclaimable: 190512 kB' 'Slab: 565588 kB' 'SReclaimable: 190512 kB' 'SUnreclaim: 375076 kB' 'KernelStack: 18832 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9592840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207800 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var 
val 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44436928 kB' 'MemAvailable: 47848144 kB' 'Buffers: 11052 kB' 'Cached: 10754684 kB' 'SwapCached: 0 kB' 'Active: 8187872 kB' 'Inactive: 3425260 kB' 'Active(anon): 7800016 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 850812 kB' 'Mapped: 145236 kB' 'Shmem: 6952620 kB' 'KReclaimable: 190504 kB' 'Slab: 565548 kB' 'SReclaimable: 190504 kB' 'SUnreclaim: 375044 kB' 'KernelStack: 18816 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9593224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207768 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.362 15:53:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44436992 kB' 'MemAvailable: 47848208 kB' 'Buffers: 11052 kB' 'Cached: 10754704 kB' 'SwapCached: 0 kB' 'Active: 8187816 kB' 'Inactive: 3425260 kB' 'Active(anon): 7799960 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 850632 kB' 'Mapped: 145176 kB' 'Shmem: 6952640 kB' 'KReclaimable: 190504 kB' 'Slab: 565604 kB' 'SReclaimable: 190504 kB' 'SUnreclaim: 375100 kB' 'KernelStack: 18784 kB' 'PageTables: 8160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9593248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207768 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.362 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 
15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
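The long run of [[ ... ]] / continue pairs above is a single pass over /proc/meminfo: each "Key: value" line is split with IFS=': ' and read -r var val _, the key is compared against the requested field (HugePages_Rsvd here), and non-matching lines are skipped until the match is found and its value echoed. A self-contained sketch of that pattern, using a hypothetical helper name meminfo_get (the real implementation lives in setup/common.sh and may differ in details):

    #!/usr/bin/env bash
    # meminfo_get KEY [NODE]: print KEY's value from /proc/meminfo, or from
    # /sys/devices/system/node/nodeNODE/meminfo when a NUMA node is given.
    meminfo_get() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it first.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip keys we are not looking for
            echo "$val"                        # numeric value only ("kB" lands in _)
            return 0
        done
        return 1
    }

For example, resv=$(meminfo_get HugePages_Rsvd) would match the resv=0 result echoed just below at setup/common.sh@33.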
00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:45.364 nr_hugepages=1024 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.364 resv_hugepages=0 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.364 surplus_hugepages=0 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 
00:04:45.364 anon_hugepages=0 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.364 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44437584 kB' 'MemAvailable: 47848800 kB' 'Buffers: 11052 kB' 'Cached: 10754744 kB' 'SwapCached: 0 kB' 'Active: 8187496 kB' 'Inactive: 3425260 kB' 'Active(anon): 7799640 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 850272 kB' 'Mapped: 145176 kB' 'Shmem: 6952680 kB' 'KReclaimable: 190504 kB' 'Slab: 565604 kB' 'SReclaimable: 190504 kB' 'SUnreclaim: 375100 kB' 'KernelStack: 18784 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9593268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207768 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 
15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:45.366 15:53:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.366 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 20670376 kB' 'MemUsed: 11965276 kB' 'SwapCached: 0 kB' 'Active: 6079344 kB' 'Inactive: 3273012 kB' 'Active(anon): 5893888 kB' 'Inactive(anon): 0 kB' 'Active(file): 185456 kB' 'Inactive(file): 3273012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8660056 kB' 'Mapped: 102672 kB' 'AnonPages: 695548 kB' 'Shmem: 5201588 kB' 'KernelStack: 11432 kB' 'PageTables: 5540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107552 kB' 'Slab: 352952 kB' 'SReclaimable: 107552 kB' 'SUnreclaim: 245400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
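At this point the system-wide count (HugePages_Total: 1024) has been confirmed, get_nodes has found two NUMA nodes expected to hold 512 pages each, and the scan above is running against /sys/devices/system/node/node0/meminfo to pick up node 0's HugePages_Surp. A rough companion sketch of that per-node accounting, reusing the hypothetical meminfo_get helper from the earlier sketch (not the literal setup/hugepages.sh logic):

    # Sum HugePages_Total across NUMA nodes and compare with the system-wide
    # figure; for this run that is 2 nodes x 512 pages == 1024.
    check_per_node_hugepages() {
        local node_dir node total=0 per_node
        for node_dir in /sys/devices/system/node/node[0-9]*; do
            node=${node_dir##*node}
            per_node=$(meminfo_get HugePages_Total "$node")
            echo "node$node: HugePages_Total=$per_node"
            (( total += per_node ))
        done
        if (( total == $(meminfo_get HugePages_Total) )); then
            echo "per-node totals add up to the system total ($total)"
        else
            echo "mismatch: per-node sum is $total"
        fi
    }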
00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.367 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 15:53:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.369 
15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27659316 kB' 'MemFree: 23767208 kB' 'MemUsed: 3892108 kB' 'SwapCached: 0 kB' 'Active: 2108568 kB' 'Inactive: 152248 kB' 'Active(anon): 1906168 kB' 'Inactive(anon): 0 kB' 'Active(file): 202400 kB' 'Inactive(file): 152248 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2105744 kB' 'Mapped: 42504 kB' 'AnonPages: 155124 kB' 'Shmem: 1751096 kB' 'KernelStack: 7368 kB' 'PageTables: 2664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82952 kB' 'Slab: 212652 kB' 'SReclaimable: 82952 kB' 'SUnreclaim: 129700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.369 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 
15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.370 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.371 15:53:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:45.371 node0=512 expecting 512
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:45.371 node1=512 expecting 512
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:45.371 
00:04:45.371 real 0m4.867s
00:04:45.371 user 0m1.863s
00:04:45.371 sys 0m3.073s
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:45.371 15:53:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:45.371 ************************************
00:04:45.371 END TEST per_node_1G_alloc
00:04:45.371 ************************************
00:04:45.371 15:53:03 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:45.371 15:53:03 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:45.371 15:53:03 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:45.371 15:53:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:45.630 ************************************
00:04:45.630 START TEST even_2G_alloc
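The even_2G_alloc run that starts here asks get_test_nr_hugepages for 2,097,152 kB backed by the default 2,048 kB hugepages and then spreads the resulting 1,024 pages evenly across the machine's two NUMA nodes, which is why both the test above and the one below end up expecting 512 pages per node. A minimal bash sketch of that arithmetic, with the sizes and node count hard-coded for this run rather than taken from the actual setup/hugepages.sh helpers:

  size_kb=2097152           # total hugepage memory requested by even_2G_alloc (2 GiB)
  hugepagesize_kb=2048      # Hugepagesize reported in /proc/meminfo
  nodes=2                   # NUMA nodes present on this test machine

  nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 1024 pages in total

  declare -a nodes_test
  for (( node = nodes - 1; node >= 0; node-- )); do
      nodes_test[node]=$(( nr_hugepages / nodes ))  # 512 pages assigned to each node
  done

  for node in "${!nodes_test[@]}"; do
      echo "node${node}=${nodes_test[node]} expecting $(( nr_hugepages / nodes ))"
  done

Run as-is this prints the same node0=512 / node1=512 lines that the per_node_1G_alloc verification echoed above.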
00:04:45.630 ************************************ 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.630 15:53:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:48.922 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:48.922 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:48.922 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:48.922 0000:de:00.0 (8086 0953): Already using the vfio-pci driver 00:04:48.922 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:48.922 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:48.922 0000:00:04.4 (8086 
2021): Already using the vfio-pci driver 00:04:48.922 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:48.922 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:48.922 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:48.922 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:48.922 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:48.922 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:48.922 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:48.922 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:48.922 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:48.922 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:48.922 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:48.922 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:48.922 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44430620 kB' 'MemAvailable: 47841836 kB' 'Buffers: 11052 kB' 'Cached: 10754860 kB' 'SwapCached: 0 kB' 'Active: 8193064 kB' 'Inactive: 3425260 kB' 'Active(anon): 7805208 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 
kB' 'Writeback: 0 kB' 'AnonPages: 855668 kB' 'Mapped: 145224 kB' 'Shmem: 6952796 kB' 'KReclaimable: 190504 kB' 'Slab: 565592 kB' 'SReclaimable: 190504 kB' 'SUnreclaim: 375088 kB' 'KernelStack: 18816 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9593904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207800 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.306 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
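The long runs of '[[ SomeField == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue' in this stretch are the xtrace of setup/common.sh's get_meminfo walking the meminfo snapshot one 'key: value' pair at a time until it reaches the field the caller asked for (AnonHugePages here, HugePages_Surp and HugePages_Rsvd just after). A simplified stand-in for that lookup, written from what the trace shows rather than copied from the SPDK script:

  # Look up one field from /proc/meminfo, or from a node's own meminfo file
  # when a node number is given; prints the value column and returns 0 on a hit.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo line var val rest
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
          line=${line#Node "$node" }          # per-node files prefix every line with "Node N"
          IFS=': ' read -r var val rest <<< "$line"
          [[ $var == "$get" ]] || continue    # the skipped keys are the 'continue' lines in the trace
          echo "$val"                         # kB for sizes, a bare count for HugePages_* fields
          return 0
      done < "$mem_f"
      return 1
  }

  # e.g. the values this verification pass collects:
  #   anon=$(get_meminfo AnonHugePages)
  #   surp=$(get_meminfo HugePages_Surp)
  #   resv=$(get_meminfo HugePages_Rsvd)

The traced helper snapshots the file with mapfile -t mem and strips the Node prefix from the whole array before splitting, as the mapfile and prefix-strip lines in the trace show, but the matching logic is the same field-by-field scan sketched here.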
00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44431584 kB' 'MemAvailable: 47842800 kB' 'Buffers: 11052 kB' 'Cached: 10754864 kB' 'SwapCached: 0 kB' 'Active: 8192932 kB' 'Inactive: 3425260 kB' 'Active(anon): 7805076 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 855564 kB' 'Mapped: 145172 kB' 'Shmem: 6952800 kB' 'KReclaimable: 190504 kB' 'Slab: 565564 kB' 'SReclaimable: 190504 kB' 'SUnreclaim: 375060 kB' 'KernelStack: 18816 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9593920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207800 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.307 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.308 
15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44434788 kB' 'MemAvailable: 47846004 kB' 'Buffers: 11052 kB' 'Cached: 10754884 kB' 'SwapCached: 0 kB' 'Active: 8192408 kB' 'Inactive: 3425260 kB' 'Active(anon): 7804552 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 855056 kB' 'Mapped: 145172 kB' 'Shmem: 6952820 kB' 'KReclaimable: 190504 kB' 'Slab: 565564 kB' 'SReclaimable: 190504 kB' 'SUnreclaim: 375060 kB' 'KernelStack: 18784 kB' 'PageTables: 8160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9593704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207736 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.308 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 
15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 
15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:50.309 nr_hugepages=1024 00:04:50.309 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:50.309 resv_hugepages=0 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:50.310 surplus_hugepages=0 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:50.310 anon_hugepages=0 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44434684 kB' 'MemAvailable: 47845900 kB' 'Buffers: 11052 kB' 'Cached: 10754924 kB' 'SwapCached: 0 kB' 'Active: 8192384 kB' 'Inactive: 3425260 kB' 'Active(anon): 7804528 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 854904 kB' 'Mapped: 145172 kB' 'Shmem: 6952860 kB' 'KReclaimable: 190504 kB' 'Slab: 565564 kB' 'SReclaimable: 190504 kB' 'SUnreclaim: 375060 kB' 'KernelStack: 18752 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9593728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207736 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 
15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:50.310 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 20681900 kB' 'MemUsed: 11953752 kB' 'SwapCached: 0 kB' 'Active: 6083376 kB' 'Inactive: 3273012 kB' 'Active(anon): 5897920 kB' 'Inactive(anon): 0 kB' 'Active(file): 185456 kB' 'Inactive(file): 3273012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8660192 kB' 'Mapped: 102668 kB' 'AnonPages: 699380 kB' 'Shmem: 5201724 kB' 'KernelStack: 11416 kB' 'PageTables: 5604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107552 kB' 'Slab: 353164 kB' 
'SReclaimable: 107552 kB' 'SUnreclaim: 245612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 
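With HugePages_Total=1024, resv=0 and surp=0, the hugepages.sh@107 check above reduces to 1024 == 1024 + 0 + 0, i.e. all 1024 pages of 2048 kB (2 GiB total, matching Hugetlb: 2097152 kB) are plain, unreserved pages; get_nodes then expects that 2 GiB to be split evenly, 512 pages (1 GiB) per NUMA node, which the per-node HugePages_Surp lookups here and below feed into. A minimal sketch of that bookkeeping (names follow the traced hugepages.sh lines, but this is a sketch rather than the script itself):

  # Global consistency: total pages == requested + surplus + reserved.
  hugepages_total=1024      # get_meminfo HugePages_Total
  nr_hugepages=1024 surp=0 resv=0
  (( hugepages_total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"

  # Expected even split across the two nodes, adjusted by per-node figures.
  nodes_test=(512 512)                       # 1024 pages / 2 nodes
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))         # reserved pages attributed to each node
      node_surp=0                            # get_meminfo HugePages_Surp $node on this host
      (( nodes_test[node] += node_surp ))
  done
  echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512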
15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
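The trace at this point is setup/common.sh's get_meminfo() scanning the node0 meminfo fields one by one until the requested field (HugePages_Surp) matches; the field-by-field scan continues below. As a rough, self-contained sketch of that parsing approach, written from what the trace shows rather than from the SPDK sources and using an illustrative function name, it boils down to:

get_meminfo_sketch() {
    # Rough re-creation of the behaviour visible in this trace, not the SPDK
    # implementation itself.
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Prefer the NUMA-local meminfo file when a node index is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        # Per-node meminfo lines carry a "Node <n> " prefix; strip it so the
        # field name splits out cleanly on ': '.
        line=${line#Node "$node" }
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            # Print the field's value; in this trace HugePages_Surp is 0.
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    # Field not present at all: fall back to 0.
    echo 0
}

For example, get_meminfo_sketch HugePages_Surp 1 prints node 1's surplus hugepage count, which is what the hugepages.sh accounting below consumes.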
00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27659316 kB' 'MemFree: 23752948 kB' 'MemUsed: 3906368 kB' 'SwapCached: 0 kB' 'Active: 2109548 kB' 'Inactive: 152248 kB' 'Active(anon): 1907148 kB' 'Inactive(anon): 0 kB' 'Active(file): 202400 kB' 'Inactive(file): 152248 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2105784 kB' 'Mapped: 42504 kB' 'AnonPages: 156096 kB' 'Shmem: 1751136 kB' 'KernelStack: 7352 kB' 'PageTables: 2488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82952 kB' 'Slab: 212400 kB' 'SReclaimable: 82952 kB' 
'SUnreclaim: 129448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.311 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 
15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:50.312 node0=512 expecting 512 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:50.312 node1=512 expecting 512 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:50.312 00:04:50.312 real 0m4.867s 00:04:50.312 user 0m1.844s 00:04:50.312 sys 0m3.089s 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.312 15:53:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:50.312 ************************************ 00:04:50.312 END TEST even_2G_alloc 00:04:50.312 ************************************ 00:04:50.312 15:53:08 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:50.312 15:53:08 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.312 15:53:08 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.312 15:53:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:50.573 ************************************ 00:04:50.573 START TEST odd_alloc 00:04:50.573 ************************************ 00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:50.573 15:53:08 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:50.573 15:53:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:53.867 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:53.867 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:53.867 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:53.867 0000:de:00.0 (8086 0953): Already using the vfio-pci driver
00:04:53.867 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:53.867 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:53.867 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:53.867 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:53.867 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:53.867 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:53.867 0000:00:04.0 (8086 
2021): Already using the vfio-pci driver 00:04:53.867 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:53.867 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:53.867 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:53.867 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:53.867 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:53.867 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:53.867 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:53.867 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:53.867 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver 00:04:55.252 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:55.252 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:55.252 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.252 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.252 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:55.252 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:55.252 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44432160 kB' 'MemAvailable: 47843384 kB' 'Buffers: 11052 kB' 'Cached: 10755052 kB' 'SwapCached: 0 kB' 'Active: 8200596 kB' 'Inactive: 3425260 kB' 'Active(anon): 7812740 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 863112 kB' 'Mapped: 145384 kB' 'Shmem: 6952988 kB' 'KReclaimable: 190520 kB' 'Slab: 565120 kB' 'SReclaimable: 190520 kB' 'SUnreclaim: 374600 kB' 'KernelStack: 18800 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486488 kB' 'Committed_AS: 9596152 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 207720 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 
15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.253 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 
15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
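For context on the numbers being verified here: the odd_alloc request of 2098176 kB works out to 1025 hugepages of 2048 kB (rounded up), which the setup/hugepages.sh trace at the start of this test spread over the two NUMA nodes as node0=513 and node1=512. A small sketch of such an even-as-possible split, using a hypothetical helper name rather than the actual code in setup/hugepages.sh, looks like this:

split_hugepages_sketch() {
    # Spread $1 hugepages across $2 NUMA nodes so the per-node counts differ
    # by at most one, handing the remainder to the lower-numbered nodes first.
    local total=$1 nodes=$2
    local node pages
    for ((node = 0; node < nodes; node++)); do
        pages=$((total / nodes))
        if ((node < total % nodes)); then
            ((pages++))
        fi
        echo "node${node}=${pages}"
    done
}

Running split_hugepages_sketch 1025 2 prints node0=513 and node1=512, matching the nodes_test values assigned above; the HugePages_Surp scan that follows checks whether the kernel had to over-allocate beyond those targets.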
00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44432048 kB' 'MemAvailable: 47843272 kB' 'Buffers: 11052 kB' 'Cached: 10755056 kB' 'SwapCached: 0 kB' 'Active: 8201268 kB' 'Inactive: 3425260 kB' 'Active(anon): 7813412 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 863808 kB' 'Mapped: 145384 kB' 'Shmem: 6952992 kB' 'KReclaimable: 190520 kB' 'Slab: 565072 kB' 'SReclaimable: 190520 kB' 'SUnreclaim: 374552 kB' 'KernelStack: 18848 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486488 kB' 'Committed_AS: 9596536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207736 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:55.254 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.255 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44431228 kB' 'MemAvailable: 47842436 kB' 'Buffers: 11052 kB' 'Cached: 10755072 kB' 'SwapCached: 0 kB' 'Active: 8199836 kB' 'Inactive: 3425260 kB' 'Active(anon): 7811980 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 862336 kB' 'Mapped: 145300 kB' 'Shmem: 6953008 kB' 'KReclaimable: 190488 kB' 'Slab: 565156 kB' 'SReclaimable: 190488 kB' 'SUnreclaim: 374668 kB' 'KernelStack: 18848 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486488 kB' 'Committed_AS: 9597676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207784 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.256 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 
15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.257 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:55.258 nr_hugepages=1025 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.258 resv_hugepages=0 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.258 surplus_hugepages=0 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.258 anon_hugepages=0 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44429256 kB' 'MemAvailable: 47840464 kB' 'Buffers: 11052 kB' 'Cached: 10755092 kB' 'SwapCached: 0 kB' 'Active: 8200556 kB' 'Inactive: 3425260 kB' 'Active(anon): 7812700 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 863052 kB' 'Mapped: 145336 kB' 'Shmem: 6953028 kB' 'KReclaimable: 190488 kB' 'Slab: 565156 kB' 'SReclaimable: 190488 kB' 'SUnreclaim: 374668 kB' 'KernelStack: 19008 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486488 kB' 'Committed_AS: 9597696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207880 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.258 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.259 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 20690664 kB' 'MemUsed: 11944988 kB' 'SwapCached: 0 kB' 'Active: 6089552 kB' 'Inactive: 3273012 kB' 'Active(anon): 5904096 kB' 'Inactive(anon): 0 kB' 'Active(file): 185456 kB' 'Inactive(file): 3273012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8660344 kB' 'Mapped: 102796 kB' 'AnonPages: 705436 kB' 'Shmem: 5201876 kB' 'KernelStack: 11432 kB' 'PageTables: 5604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107568 kB' 'Slab: 353168 kB' 'SReclaimable: 107568 kB' 'SUnreclaim: 245600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
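(For orientation, the lookup that the trace keeps repeating above reduces to the pattern below. It is a reconstruction from the setup/common.sh lines echoed in the trace, not the verbatim source; the function name get_meminfo_sketch and its argument handling are illustrative only.)

#!/usr/bin/env bash
# Sketch of the meminfo lookup exercised by the trace: print the value of KEY
# from /proc/meminfo, or from a specific NUMA node's meminfo when NODE is given.
# Usage: get_meminfo_sketch HugePages_Surp      -> "0" on this run
#        get_meminfo_sketch HugePages_Total     -> "1025"
#        get_meminfo_sketch HugePages_Surp 0    -> node0 value, "0"
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Per-node lookups read the node's own meminfo; its lines carry a
    # "Node N " prefix that is stripped so the keys match /proc/meminfo's.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

On this run the lookups above return surp=0, resv=0 and HugePages_Total=1025, so the hugepages.sh check (( 1025 == nr_hugepages + surp + resv )) passes; the odd allocation is split across the two NUMA nodes as 512 and 513 pages, which the per-node HugePages_Surp/HugePages_Total reads that follow then check.
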
00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.260 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.261 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27659316 kB' 'MemFree: 23737448 kB' 'MemUsed: 3921868 kB' 'SwapCached: 0 kB' 'Active: 2110284 kB' 'Inactive: 152248 kB' 'Active(anon): 1907884 kB' 'Inactive(anon): 0 kB' 'Active(file): 202400 kB' 'Inactive(file): 152248 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2105824 kB' 'Mapped: 42540 kB' 'AnonPages: 156800 kB' 'Shmem: 1751176 kB' 'KernelStack: 7528 kB' 'PageTables: 2796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82920 kB' 'Slab: 211988 kB' 'SReclaimable: 82920 kB' 'SUnreclaim: 129068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.262 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:55.263 node0=512 expecting 513 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:55.263 node1=513 expecting 512 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:55.263 00:04:55.263 real 0m4.922s 00:04:55.263 user 0m1.890s 00:04:55.263 sys 0m3.034s 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.263 15:53:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:55.263 ************************************ 00:04:55.263 END TEST odd_alloc 00:04:55.263 ************************************ 00:04:55.525 15:53:13 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:55.525 15:53:13 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.525 15:53:13 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.525 15:53:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:55.525 ************************************ 00:04:55.525 START TEST custom_alloc 00:04:55.525 ************************************ 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 
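Within this custom_alloc prologue the trace converts a requested size into a hugepage count (get_test_nr_hugepages 1048576 yields nr_hugepages=512, and the 2097152 call further down yields 1024) and then splits the count across the NUMA nodes (256 and 256 here; the later call seeds the split from nodes_hp instead). A hedged sketch of that arithmetic, assuming the sizes are in kB against the default 2048 kB hugepage, which matches the numbers in the trace; the function name and parameters are illustrative, not the verbatim setup/hugepages.sh:

  # Illustrative sketch: size (kB) -> page count, split evenly over the nodes.
  get_test_nr_hugepages_sketch() {
      local size_kb=$1 hugepagesize_kb=${2:-2048} no_nodes=${3:-2}
      local nr_hugepages=$((size_kb / hugepagesize_kb))
      local -a nodes_test
      local node per_node=$((nr_hugepages / no_nodes))
      for ((node = 0; node < no_nodes; node++)); do
          nodes_test[node]=$per_node   # even split; a HUGENODE/nodes_hp list overrides this
      done
      echo "nr_hugepages=$nr_hugepages nodes: ${nodes_test[*]}"
  }

  # get_test_nr_hugepages_sketch 1048576   -> nr_hugepages=512 nodes: 256 256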
00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for 
node in "${!nodes_hp[@]}" 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.525 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.526 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:55.526 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:55.526 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:55.526 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:55.526 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:55.526 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:55.526 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:55.526 15:53:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:55.526 15:53:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.526 15:53:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:58.816 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:58.816 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:58.816 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:58.816 0000:de:00.0 (8086 0953): Already using the vfio-pci driver 00:04:58.816 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:58.816 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:58.816 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:58.816 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:58.816 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:58.816 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:58.816 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:58.816 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:58.816 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 
00:04:58.816 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:58.816 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:58.816 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:58.816 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:58.816 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:58.816 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:58.816 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 43377656 kB' 'MemAvailable: 46788880 kB' 'Buffers: 11052 kB' 'Cached: 10755236 kB' 'SwapCached: 0 kB' 'Active: 8204228 kB' 'Inactive: 3425260 kB' 'Active(anon): 7816372 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 866536 kB' 'Mapped: 145404 kB' 'Shmem: 6953172 kB' 'KReclaimable: 190520 kB' 'Slab: 565688 kB' 'SReclaimable: 190520 kB' 'SUnreclaim: 375168 kB' 'KernelStack: 18768 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963224 kB' 'Committed_AS: 9595696 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 207816 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.199 15:53:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.199 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.200 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
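[editor's note] The xtrace above is the per-key scan that setup/common.sh's get_meminfo performs: it buffers /proc/meminfo (or a per-node meminfo file when a node is given) with mapfile, strips any "Node N " prefix, then reads each line with IFS=': ' and keeps continuing until the requested key (here AnonHugePages, then HugePages_Surp) is found, at which point it echoes the value and returns. A minimal stand-alone sketch of that parsing pattern, with hypothetical names and without the per-node path or the mapfile buffering seen in the trace, would be:

    # Sketch only -- not the SPDK helper itself. Scans /proc/meminfo for one key
    # using the same IFS=': ' read loop visible in the xtrace output above.
    get_meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] || continue   # skip non-matching keys
            echo "$val"                        # value in kB (or a bare count)
            return 0
        done < /proc/meminfo
        return 1
    }

    # Example: get_meminfo_value AnonHugePages   # prints 0 for the snapshot above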
00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 43377220 kB' 'MemAvailable: 46788444 kB' 'Buffers: 11052 kB' 'Cached: 10755236 kB' 'SwapCached: 0 kB' 'Active: 8204236 kB' 'Inactive: 3425260 kB' 'Active(anon): 7816380 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 866560 kB' 'Mapped: 145356 kB' 'Shmem: 6953172 kB' 'KReclaimable: 190520 kB' 'Slab: 565764 kB' 'SReclaimable: 190520 kB' 'SUnreclaim: 375244 kB' 'KernelStack: 18768 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963224 kB' 'Committed_AS: 9595712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207816 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.201 15:53:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.201 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.202 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 
15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.203 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 43377320 kB' 'MemAvailable: 46788544 kB' 'Buffers: 11052 kB' 'Cached: 10755236 kB' 'SwapCached: 0 kB' 'Active: 8204236 kB' 'Inactive: 3425260 kB' 'Active(anon): 7816380 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 
'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 866560 kB' 'Mapped: 145356 kB' 'Shmem: 6953172 kB' 'KReclaimable: 190520 kB' 'Slab: 565764 kB' 'SReclaimable: 190520 kB' 'SUnreclaim: 375244 kB' 'KernelStack: 18768 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963224 kB' 'Committed_AS: 9595732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207816 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.204 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.205 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.206 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
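[editor's note] The trace just below finishes the HugePages_Rsvd lookup (echo 0, resv=0) and then setup/hugepages.sh checks the accounting at @107: the HugePages_Total value read from /proc/meminfo (1536 here) must equal the requested page count plus surplus and reserved pages. An illustrative sketch of that consistency check, assuming nr_hugepages is the requested count and using hypothetical names, would be:

    # Sketch only -- mirrors the (( 1536 == nr_hugepages + surp + resv )) check
    # seen in the trace; values are pulled straight from /proc/meminfo.
    verify_hugepage_accounting() {
        local requested=$1 total surp resv
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
        surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
        resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
        (( total == requested + surp + resv ))   # non-zero exit if inconsistent
    }

    # Example: verify_hugepage_accounting 1536   # holds for the snapshot above (1536 == 1536 + 0 + 0)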
00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:00.207 nr_hugepages=1536 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.207 resv_hugepages=0 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.207 surplus_hugepages=0 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.207 anon_hugepages=0 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 43377644 kB' 'MemAvailable: 46788868 kB' 'Buffers: 11052 kB' 'Cached: 10755296 kB' 'SwapCached: 0 kB' 'Active: 8203952 kB' 'Inactive: 3425260 kB' 'Active(anon): 7816096 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 866180 kB' 'Mapped: 145356 kB' 'Shmem: 6953232 kB' 'KReclaimable: 190520 kB' 'Slab: 565764 kB' 'SReclaimable: 190520 kB' 'SUnreclaim: 375244 kB' 'KernelStack: 18736 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963224 kB' 'Committed_AS: 9595756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207816 kB' 'VmallocChunk: 0 kB' 
'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.207 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.208 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.209 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 20683480 kB' 'MemUsed: 11952172 kB' 'SwapCached: 0 kB' 'Active: 6092844 kB' 'Inactive: 3273012 kB' 'Active(anon): 5907388 kB' 'Inactive(anon): 0 kB' 'Active(file): 185456 kB' 'Inactive(file): 3273012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8660504 kB' 'Mapped: 102852 kB' 'AnonPages: 708496 kB' 'Shmem: 5202036 kB' 'KernelStack: 11352 kB' 'PageTables: 5464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107600 kB' 'Slab: 353208 kB' 'SReclaimable: 107600 kB' 'SUnreclaim: 245608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.210 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.211 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.212 15:53:18 
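
The long runs of "continue" above are the field-matching loop inside setup/common.sh's get_meminfo: the caller asks for one key (HugePages_Total for the whole system, then HugePages_Surp for node 0), the function snapshots /proc/meminfo or the per-node /sys/devices/system/node/nodeN/meminfo, strips the "Node N " prefix, and walks every line with IFS=': ' and read -r var val _ until the key matches, at which point it echoes the value (1536 and 0 in the two calls traced so far). The sketch below is reconstructed from that trace rather than copied from the script, so treat the exact function body as an approximation:

    #!/usr/bin/env bash
    # Minimal re-creation of the get_meminfo pattern traced above (not the
    # original setup/common.sh; option handling there may differ).
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node lookups read the sysfs copy, whose lines carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")           # drop the "Node N " prefix (extglob)
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue       # the repeated 'continue' entries above
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total     # the run above printed 1536
    get_meminfo HugePages_Surp 0    # node 0; the run above printed 0

As a sanity check on the node-0 snapshot printed above, MemUsed (11952172 kB) is exactly MemTotal (32635652 kB) minus MemFree (20683480 kB).
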
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27659316 kB' 'MemFree: 22693988 kB' 'MemUsed: 4965328 kB' 'SwapCached: 0 kB' 'Active: 2111140 kB' 'Inactive: 152248 kB' 'Active(anon): 1908740 kB' 'Inactive(anon): 0 kB' 'Active(file): 202400 kB' 'Inactive(file): 152248 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2105868 kB' 'Mapped: 42504 kB' 'AnonPages: 157684 kB' 'Shmem: 1751220 kB' 'KernelStack: 7384 kB' 'PageTables: 2664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82920 kB' 'Slab: 212556 kB' 'SReclaimable: 82920 kB' 'SUnreclaim: 129636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.212 15:53:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.212 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.213 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.214 15:53:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:00.214 node0=512 expecting 512 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:00.214 node1=1024 expecting 1024 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:00.214 00:05:00.214 real 0m4.862s 00:05:00.214 user 0m1.900s 00:05:00.214 sys 0m3.023s 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.214 15:53:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:00.214 ************************************ 00:05:00.214 END TEST custom_alloc 00:05:00.214 ************************************ 00:05:00.214 15:53:18 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:00.214 15:53:18 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.214 15:53:18 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.214 15:53:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:00.474 ************************************ 00:05:00.474 START TEST no_shrink_alloc 00:05:00.474 ************************************ 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:00.474 15:53:18 
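
That closes out custom_alloc: the 1536 pages confirmed at hugepages.sh@110 are split across the two NUMA nodes, get_nodes recorded 512 on node 0 and 1024 on node 1 in nodes_sys, the @115-@117 loop folded reserved and surplus pages into nodes_test (the surplus lookups above both returned 0), and the test passes because the resulting layout matches the expected 512,1024. Below is a compact, illustrative restatement of that bookkeeping, simplified from the sorted_t/sorted_s machinery in the trace and with this run's values hard-coded, before the no_shrink_alloc setup that has just started continues:

    # Illustrative only - mirrors the shape of the hugepages.sh@110-@130 checks
    # traced above; it is not the script itself.
    nr_hugepages=1536 surp=0 resv=0
    (( 1536 == nr_hugepages + surp + resv )) && echo "1536 pages accounted for"

    nodes_sys=([0]=512 [1]=1024)    # per-node totals reported by the system
    nodes_test=([0]=512 [1]=1024)   # per-node totals the test expects

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))   # reserved pages, 0 in this run
        (( nodes_test[node] += 0 ))      # per-node HugePages_Surp, 0 in this run
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done

    printf -v got '%s,%s'  "${nodes_test[0]}" "${nodes_test[1]}"
    printf -v want '%s,%s' "${nodes_sys[0]}"  "${nodes_sys[1]}"
    [[ $got == "$want" ]] && echo "custom_alloc layout OK"   # 512,1024 == 512,1024 here
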
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.474 15:53:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:03.767 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:03.767 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:03.767 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:de:00.0 (8086 0953): Already using the vfio-pci driver 00:05:03.767 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:03.767 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:05.176 15:53:22 
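
Backing up to the get_test_nr_hugepages 2097152 0 call traced above: a 2097152 kB request at the default 2048 kB hugepage size works out to 1024 pages, and because a single node id (0) is passed, the whole pool is pinned to node 0 (nodes_test[0]=1024). The division by the hugepage size is inferred from the numbers rather than quoted from hugepages.sh, so the sketch below is a restatement of the arithmetic, not the script:

    size_kb=2097152                              # requested pool size (first argument)
    hugepage_kb=2048                             # Hugepagesize, per the snapshot below
    nr_hugepages=$(( size_kb / hugepage_kb ))    # 2097152 / 2048 = 1024

    node_ids=(0)                                 # second argument: restrict to node 0
    nodes_test=()
    for id in "${node_ids[@]}"; do
        nodes_test[id]=$nr_hugepages             # nodes_test[0]=1024, as traced
    done
    echo "nr_hugepages=$nr_hugepages on node(s) ${node_ids[*]}"
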
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.176 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44418228 kB' 'MemAvailable: 47829432 kB' 'Buffers: 11052 kB' 'Cached: 10755424 kB' 'SwapCached: 0 kB' 'Active: 8217072 kB' 'Inactive: 3425260 kB' 'Active(anon): 7829216 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 878948 kB' 'Mapped: 145496 kB' 'Shmem: 6953360 kB' 'KReclaimable: 190480 kB' 'Slab: 566020 kB' 'SReclaimable: 190480 kB' 'SUnreclaim: 375540 kB' 'KernelStack: 18928 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9597204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207848 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 
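
The system-wide snapshot just printed is consistent with that request: HugePages_Total 1024 at Hugepagesize 2048 kB accounts for exactly the Hugetlb 2097152 kB shown, all of it still free, and AnonHugePages is 0 kB. The scan that continues below is the same field-by-field loop as before, this time looking for AnonHugePages; it appears to run because the hugepages.sh@96 guard found transparent hugepages not set to "never" (it tested the string "always [madvise] never" against *[never]*). A minimal sketch of that guard, assuming the mode string comes from /sys/kernel/mm/transparent_hugepage/enabled, a path not shown in this part of the trace:

    # The bracketed word is the active THP mode, e.g. "always [madvise] never".
    # Path assumed; the trace only shows the resulting string being tested.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null \
          || echo "always [madvise] never")

    if [[ $thp != *"[never]"* ]]; then
        # THP is not disabled, so the AnonHugePages counter is worth sampling.
        grep -m1 '^AnonHugePages:' /proc/meminfo
    fi
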
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
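(Editor's note, for readers following the trace: the long run of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries above and below is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" pair at a time until it reaches the requested field, here AnonHugePages. A minimal stand-alone sketch of that lookup pattern follows; the function name and the simplified loop are illustrative, not the actual SPDK helper, which additionally handles per-node meminfo files.)

# Sketch only (assumed helper name), not the real setup/common.sh code:
# scan /proc/meminfo one "key: value" pair at a time and print the value
# of the requested field, i.e. the loop the xtrace entries here show.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other key
        echo "$val"                        # numeric value; the "kB" unit lands in $_
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch AnonHugePages    # 0 in this run
get_meminfo_sketch HugePages_Total  # 1024 in this run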
00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.177 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44418400 kB' 'MemAvailable: 47829604 kB' 'Buffers: 11052 kB' 'Cached: 10755428 kB' 'SwapCached: 0 kB' 'Active: 8216820 kB' 'Inactive: 3425260 kB' 'Active(anon): 7828964 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 878712 kB' 'Mapped: 145468 kB' 'Shmem: 6953364 kB' 'KReclaimable: 190480 kB' 'Slab: 566020 kB' 'SReclaimable: 190480 kB' 'SUnreclaim: 375540 kB' 'KernelStack: 18704 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9597224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207768 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.178 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 
15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.179 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.180 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44418952 kB' 'MemAvailable: 47830156 kB' 'Buffers: 11052 kB' 'Cached: 10755444 kB' 'SwapCached: 0 kB' 'Active: 8216040 kB' 'Inactive: 3425260 kB' 'Active(anon): 7828184 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 878448 kB' 'Mapped: 145424 kB' 'Shmem: 6953380 kB' 'KReclaimable: 190480 kB' 'Slab: 565924 kB' 'SReclaimable: 190480 kB' 'SUnreclaim: 375444 kB' 'KernelStack: 18736 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9597244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207768 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.181 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
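(Editor's note: once AnonHugePages, HugePages_Surp and HugePages_Rsvd have been read, all 0 in this run, verify_nr_hugepages only has to confirm that the kernel's hugepage accounting adds up, which appears to correspond to the setup/hugepages.sh@107 and @109 checks a little further down. A hedged stand-alone equivalent of that arithmetic is sketched below; variable names are illustrative, not the script's own.)

nr_hugepages=1024    # the page count this test configured

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)

# Pool is consistent when total pages == requested + surplus + reserved;
# in this log: 1024 == 1024 + 0 + 0, so the check passes.
if (( total == nr_hugepages + surp + resv )); then
    echo "nr_hugepages=$nr_hugepages surplus=$surp reserved=$resv: OK"
else
    echo "hugepage accounting mismatch (total=$total)" >&2
    exit 1
fi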
00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.182 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:05.183 nr_hugepages=1024 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.183 resv_hugepages=0 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.183 surplus_hugepages=0 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.183 anon_hugepages=0 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44419640 kB' 'MemAvailable: 47830844 kB' 'Buffers: 11052 kB' 'Cached: 10755468 kB' 'SwapCached: 0 kB' 'Active: 8216064 kB' 'Inactive: 3425260 kB' 'Active(anon): 7828208 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 878452 kB' 'Mapped: 145424 kB' 'Shmem: 6953404 kB' 'KReclaimable: 190480 kB' 'Slab: 565924 kB' 'SReclaimable: 190480 kB' 'SUnreclaim: 375444 kB' 
'KernelStack: 18736 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9597268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207768 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.183 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
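
[editor's note] Stripped of the xtrace noise, the lookups above amount to a small /proc/meminfo field scan: each "Name: value" line is split on ': ', skipped with continue until the requested field is reached, and that field's value is then echoed (HugePages_Rsvd gave resv=0 above, and the scan now in progress fetches HugePages_Total, expected to be 1024). A compact sketch of the same technique, reconstructed from the trace rather than copied verbatim from setup/common.sh:

#!/usr/bin/env bash
# Minimal /proc/meminfo field lookup in the style the trace shows:
# split each "Name: value [kB]" line on ': ' and keep skipping
# until the requested field name matches, then print its value.
get_meminfo_field() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < /proc/meminfo
	return 1
}

resv=$(get_meminfo_field HugePages_Rsvd)     # 0 in the run above
total=$(get_meminfo_field HugePages_Total)   # 1024 in the run above
echo "nr_hugepages=$total resv_hugepages=$resv"
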
00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 
15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.184 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # return 0 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 19628516 kB' 'MemUsed: 13007136 kB' 'SwapCached: 0 kB' 'Active: 6103472 kB' 'Inactive: 3273012 kB' 'Active(anon): 5918016 kB' 'Inactive(anon): 0 kB' 'Active(file): 185456 kB' 'Inactive(file): 3273012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8660644 kB' 'Mapped: 102920 kB' 'AnonPages: 719276 kB' 'Shmem: 5202176 kB' 'KernelStack: 11336 kB' 'PageTables: 5492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107600 kB' 'Slab: 353396 kB' 'SReclaimable: 107600 kB' 'SUnreclaim: 245796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:05.185 15:53:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.185 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
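
[editor's note] The lookup running here is the per-node variant seen earlier in the trace: with node=0, the script switches the source file to /sys/devices/system/node/node0/meminfo, strips the leading "Node 0 " prefix from each line with an extglob expansion, and then scans for HugePages_Surp exactly as before. A sketch reconstructed from that trace (function and variable names here are illustrative, not the actual setup/common.sh source):

#!/usr/bin/env bash
shopt -s extglob
# Per-node meminfo lookup in the style the trace shows for node 0:
# read the per-node meminfo file when it exists and drop the
# "Node N " prefix before splitting each line on ': '.
get_node_meminfo_field() {
	local get=$1 node=$2 mem_f=/proc/meminfo
	local -a mem
	local line var val _
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

get_node_meminfo_field HugePages_Surp 0   # 0 on node0 in the run above
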
00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.186 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:05.187 node0=1024 expecting 1024 00:05:05.187 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:05.187 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:05.187 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:05.187 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:05.187 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.187 15:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:08.593 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:08.593 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:08.593 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:de:00.0 (8086 0953): Already using the vfio-pci driver 00:05:08.593 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:80:04.5 
(8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:08.593 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver 00:05:10.045 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44397856 kB' 'MemAvailable: 47809060 kB' 'Buffers: 11052 kB' 'Cached: 10755584 kB' 'SwapCached: 0 kB' 'Active: 8218120 kB' 'Inactive: 3425260 kB' 'Active(anon): 7830264 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 880136 kB' 'Mapped: 145536 kB' 'Shmem: 6953520 kB' 'KReclaimable: 190480 kB' 'Slab: 566724 kB' 'SReclaimable: 190480 kB' 'SUnreclaim: 376244 kB' 'KernelStack: 18672 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9597956 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 207624 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.045 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 
15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 
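(The trace above is setup/common.sh's get_meminfo resolving AnonHugePages: it prints one snapshot of /proc/meminfo, then walks the keys with IFS=': ' until the requested field matches, echoes its value (0 here, so hugepages.sh sets anon=0), and moves on to query HugePages_Surp. A minimal sketch of that lookup, under the assumption of a hypothetical helper name get_meminfo_sketch and omitting the per-node branch the script also supports:

    get_meminfo_sketch() {
        # Look up a single field (e.g. HugePages_Surp) in /proc/meminfo and
        # print its numeric value. The per-node variant seen in the trace
        # reads /sys/devices/system/node/node<N>/meminfo instead and strips
        # the leading "Node <N> " prefix from each line first; that part is
        # left out of this sketch.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    # e.g. on the host in this log:
    #   get_meminfo_sketch HugePages_Total   # -> 1024
    #   get_meminfo_sketch HugePages_Surp    # -> 0

This is why the same full meminfo snapshot reappears before each lookup: every call re-reads the file and scans it from the top.)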
00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.046 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44399800 kB' 'MemAvailable: 47811004 kB' 'Buffers: 11052 kB' 'Cached: 10755588 kB' 'SwapCached: 0 kB' 'Active: 8218768 kB' 'Inactive: 3425260 kB' 'Active(anon): 7830912 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 880808 kB' 'Mapped: 145464 kB' 'Shmem: 6953524 kB' 'KReclaimable: 190480 kB' 'Slab: 566848 kB' 'SReclaimable: 190480 kB' 'SUnreclaim: 376368 kB' 'KernelStack: 18736 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9597972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207592 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.047 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.048 
15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.048 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44400176 kB' 'MemAvailable: 47811380 kB' 'Buffers: 11052 kB' 'Cached: 10755604 kB' 'SwapCached: 0 kB' 'Active: 8218792 kB' 'Inactive: 3425260 kB' 'Active(anon): 7830936 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 880804 kB' 'Mapped: 145464 kB' 'Shmem: 6953540 kB' 'KReclaimable: 190480 kB' 'Slab: 566848 kB' 'SReclaimable: 190480 kB' 'SUnreclaim: 376368 kB' 'KernelStack: 18736 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9597996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207592 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.049 15:53:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.050 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.051 15:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:10.051 nr_hugepages=1024 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:10.051 resv_hugepages=0 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:10.051 surplus_hugepages=0 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:10.051 anon_hugepages=0 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.051 15:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294968 kB' 'MemFree: 44400176 kB' 'MemAvailable: 47811380 kB' 'Buffers: 11052 kB' 'Cached: 10755644 kB' 'SwapCached: 0 kB' 'Active: 8218468 kB' 'Inactive: 3425260 kB' 'Active(anon): 7830612 kB' 'Inactive(anon): 0 kB' 'Active(file): 387856 kB' 'Inactive(file): 3425260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 880388 kB' 'Mapped: 145464 kB' 'Shmem: 6953580 kB' 'KReclaimable: 190480 kB' 'Slab: 566848 kB' 'SReclaimable: 190480 kB' 'SUnreclaim: 376368 kB' 'KernelStack: 18720 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487512 kB' 'Committed_AS: 9598020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207592 kB' 'VmallocChunk: 0 kB' 'Percpu: 59520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 435156 kB' 'DirectMap2M: 9730048 kB' 'DirectMap1G: 58720256 kB' 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
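For reference, the HugePages_Total value echoed further down is produced by the get_meminfo scan traced here: the helper reads /proc/meminfo (or a node-local copy), splits each line on ': ', and prints the value of the first key that matches. A minimal bash sketch, reconstructed from the setup/common.sh@17-33 lines in this trace (an illustration of the traced logic, not the verbatim SPDK helper):
    # Sketch of a get_meminfo-style helper, reconstructed from the trace above.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2          # key to look up, optional NUMA node
        local var val line
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node lookups read the node-local file when it exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every row with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"               # e.g. 1024 for HugePages_Total on this host
            return 0
        done
        return 1
    }
Called as get_meminfo HugePages_Total it would print 1024 for the dump shown above; with a node argument it reads /sys/devices/system/node/node<N>/meminfo instead.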
00:05:10.051 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [... repetitive xtrace collapsed: every /proc/meminfo key from Buffers through FilePmdMapped is tested against HugePages_Total and skipped with 'continue' ...] 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.053 15:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 19616576 kB' 'MemUsed: 13019076 kB' 'SwapCached: 0 kB' 'Active: 6106996 kB' 'Inactive: 3273012 kB' 'Active(anon): 5921540 kB' 'Inactive(anon): 0 kB' 'Active(file): 185456 kB' 'Inactive(file): 3273012 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8660764 kB' 'Mapped: 102960 kB' 'AnonPages: 722396 kB' 'Shmem: 5202296 kB' 'KernelStack: 11336 kB' 'PageTables: 5536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107600 kB' 'Slab: 353436 kB' 'SReclaimable: 107600 kB' 'SUnreclaim: 245836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
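The lookup that follows runs against node0's local meminfo (note the switch of mem_f to /sys/devices/system/node/node0/meminfo above, plus the "Node <n> " prefix strip). Outside of this test the same per-node hugepage counters can be read straight from sysfs; a small self-contained sketch, assuming the standard 2048 kB pool reported in the meminfo dump above (the test itself goes through get_meminfo):
    # Read per-node 2 MiB hugepage counters directly from sysfs (illustration only).
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        pool=$node/hugepages/hugepages-2048kB
        echo "node$n: total=$(< "$pool"/nr_hugepages) free=$(< "$pool"/free_hugepages)"
    done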
00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.053 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [... repetitive xtrace collapsed: every node0 meminfo key from Inactive through ShmemHugePages is tested against HugePages_Surp and skipped with 'continue' ...]
00:05:10.054 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.054 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:10.055 node0=1024 expecting 1024 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:10.055 00:05:10.055 real 0m9.644s 00:05:10.055 user 0m3.711s 00:05:10.055 sys 0m6.077s 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.055 15:53:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:10.055 ************************************ 00:05:10.055 END TEST no_shrink_alloc 00:05:10.055 ************************************ 00:05:10.055 15:53:27 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:10.055 15:53:27 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:10.055 15:53:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:10.055 15:53:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:10.055 15:53:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:10.055 15:53:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:10.055 15:53:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:10.055 15:53:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:10.055 15:53:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:10.055 15:53:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:10.055 15:53:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:10.055 15:53:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:10.055 15:53:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:10.055 15:53:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:10.055 00:05:10.055 real 0m37.256s 00:05:10.055 user 0m13.464s 00:05:10.055 sys 0m22.023s 00:05:10.055 15:53:27 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.055 15:53:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:10.055 ************************************ 00:05:10.055 END TEST hugepages 00:05:10.055 ************************************ 00:05:10.055 15:53:27 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:05:10.055 15:53:27 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.055 15:53:27 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.055 15:53:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:10.055 ************************************ 00:05:10.055 START TEST driver 00:05:10.055 ************************************ 00:05:10.055 15:53:27 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:05:10.329 * Looking for test storage... 
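The clear_hp step traced above (setup/hugepages.sh@37-45) returns every per-node hugepage pool to zero before the next suite starts. A minimal stand-alone sketch of that reset, iterating sysfs directly so it is self-contained (illustrative reconstruction, not the verbatim SPDK helper; needs root):
    clear_hp() {
        local node hp
        # Walk every NUMA node and every hugepage size it exposes.
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
        # Matches the CLEAR_HUGE=yes export seen in the trace.
        export CLEAR_HUGE=yes
    }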
00:05:10.329 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:10.329 15:53:28 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:10.329 15:53:28 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:10.329 15:53:28 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:22.586 15:53:39 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:22.586 15:53:39 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.586 15:53:39 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.586 15:53:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:22.586 ************************************ 00:05:22.586 START TEST guess_driver 00:05:22.586 ************************************ 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 198 > 0 )) 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:22.586 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:22.586 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:22.586 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:22.586 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:22.586 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:22.586 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:22.586 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:22.586 15:53:39 
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:22.586 Looking for driver=vfio-pci 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.586 15:53:39 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:25.124 15:53:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [... repetitive xtrace collapsed: for every device line printed by 'setup.sh config' the '->' marker is matched and the bound driver is confirmed as vfio-pci ...] 00:05:29.201 15:53:46 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail
== 0 )) 00:05:29.201 15:53:46 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:29.201 15:53:46 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:29.201 15:53:46 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:41.415 00:05:41.415 real 0m19.067s 00:05:41.415 user 0m3.804s 00:05:41.415 sys 0m6.787s 00:05:41.415 15:53:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.415 15:53:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:41.415 ************************************ 00:05:41.415 END TEST guess_driver 00:05:41.415 ************************************ 00:05:41.415 00:05:41.415 real 0m30.599s 00:05:41.415 user 0m5.746s 00:05:41.415 sys 0m10.382s 00:05:41.415 15:53:58 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.415 15:53:58 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:41.415 ************************************ 00:05:41.415 END TEST driver 00:05:41.415 ************************************ 00:05:41.415 15:53:58 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:05:41.415 15:53:58 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.415 15:53:58 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.415 15:53:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:41.415 ************************************ 00:05:41.415 START TEST devices 00:05:41.415 ************************************ 00:05:41.415 15:53:58 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:05:41.415 * Looking for test storage... 
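The guess_driver test that just completed picks vfio-pci because the host exposes IOMMU groups (the '(( 198 > 0 ))' check) and 'modprobe --show-depends vfio_pci' resolves to real .ko modules. A stand-alone sketch of that decision, reconstructed from the setup/driver.sh trace above; the exact combination of conditions is simplified, and the fallback string is the one the script tests for:
    pick_driver() {
        local unsafe_vfio=N iommu_groups
        shopt -s nullglob
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        iommu_groups=(/sys/kernel/iommu_groups/*)
        # Usable when the host has IOMMU groups (or unsafe no-IOMMU mode is on)
        # and the vfio_pci dependency chain resolves to .ko objects.
        if { (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; } &&
           [[ $(modprobe --show-depends vfio_pci) == *.ko* ]]; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }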
00:05:41.415 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:41.415 15:53:58 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:41.415 15:53:58 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:41.415 15:53:58 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:41.415 15:53:58 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:46.691 15:54:03 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@201 -- 
# ctrl=nvme0n1 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:dd:00.0 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\d\:\0\0\.\0* ]] 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:46.691 15:54:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:46.691 15:54:03 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:46.691 No valid GPT data, bailing 00:05:46.691 15:54:03 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:46.691 15:54:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:46.691 15:54:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:46.691 15:54:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:46.691 15:54:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:46.691 15:54:03 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:46.691 15:54:03 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:dd:00.0 00:05:46.692 15:54:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:46.692 15:54:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:46.692 15:54:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:46.692 15:54:03 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:df:00.0 00:05:46.692 15:54:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\f\:\0\0\.\0* ]] 00:05:46.692 15:54:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:46.692 15:54:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:46.692 15:54:03 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:05:46.692 No valid GPT data, bailing 00:05:46.692 15:54:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:46.692 15:54:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:46.692 15:54:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:46.692 15:54:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:46.692 15:54:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:46.692 15:54:04 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:df:00.0 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:46.692 15:54:04 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:de:00.0 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\e\:\0\0\.\0* ]] 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:46.692 15:54:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:05:46.692 15:54:04 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:05:46.692 No valid GPT data, bailing 00:05:46.692 15:54:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:46.692 15:54:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:46.692 15:54:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:46.692 15:54:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:46.692 15:54:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:05:46.692 15:54:04 setup.sh.devices -- setup/common.sh@80 -- # echo 400088457216 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 400088457216 >= min_disk_size )) 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:de:00.0 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:dc:00.0 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\c\:\0\0\.\0* ]] 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:46.692 15:54:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:05:46.692 15:54:04 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme3n1 00:05:46.692 No valid GPT data, bailing 00:05:46.692 15:54:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:46.692 15:54:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:46.692 15:54:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:46.692 15:54:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:46.692 15:54:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:46.692 15:54:04 setup.sh.devices -- setup/common.sh@80 -- # echo 400088457216 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 400088457216 >= min_disk_size )) 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:dc:00.0 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:46.692 15:54:04 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount 
nvme_mount 00:05:46.692 15:54:04 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.692 15:54:04 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.692 15:54:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:46.692 ************************************ 00:05:46.692 START TEST nvme_mount 00:05:46.692 ************************************ 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:46.692 15:54:04 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:47.264 Creating new GPT entries in memory. 00:05:47.264 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:47.264 other utilities. 00:05:47.264 15:54:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:47.264 15:54:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:47.264 15:54:05 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:47.264 15:54:05 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:47.264 15:54:05 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:48.649 Creating new GPT entries in memory. 00:05:48.649 The operation has completed successfully. 
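The partitioning step just traced reduces to a short shell sequence. The sketch below is reconstructed from the trace itself (sgdisk, flock, mkfs.ext4 and mount are the commands actually shown), not copied from setup/common.sh, and uses the same mount point the test uses; it is an illustration of the flow, not the script's literal source.

    disk=/dev/nvme0n1
    mnt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                            # wipe any existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # one 1 GiB partition, sectors 2048-2099199
    mkfs.ext4 -qF "${disk}p1"                           # quiet, forced ext4 format of the new partition
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"                            # mount it where the dummy test file will be written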
00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 129780 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:dd:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.649 15:54:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 
15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.945 15:54:09 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.946 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.946 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.946 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.946 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.946 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:51.946 15:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.324 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:53.324 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:53.324 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:53.324 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:53.324 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:53.324 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:53.324 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:53.324 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:53.324 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:53.324 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:53.324 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:53.324 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:53.324 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:53.584 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:53.584 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:53.584 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:53.584 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:53.584 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:53.584 15:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:53.584 15:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:53.584 15:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:53.584 15:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:53.844 15:54:11 setup.sh.devices.nvme_mount -- 
setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:53.844 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:dd:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:53.844 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0 00:05:53.844 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:53.844 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:53.844 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:53.844 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:53.844 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:53.844 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:53.844 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:53.844 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.844 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0 00:05:53.844 15:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:53.844 15:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.844 15:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:05:57.141 15:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.521 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:58.521 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:58.521 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # 
mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:58.521 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:58.521 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:58.521 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:58.521 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:dd:00.0 data@nvme0n1 '' '' 00:05:58.521 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0 00:05:58.521 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:58.521 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:58.521 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:58.521 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:58.521 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:58.521 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:58.522 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.522 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0 00:05:58.522 15:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:58.522 15:54:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:58.522 15:54:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.817 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:01.818 15:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.725 15:54:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:03.725 15:54:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:03.725 15:54:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:03.725 15:54:21 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 
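cleanup_nvme, entered at this point, is the same teardown the earlier trace showed for this test: unmount the test mount point if anything is still mounted there, then wipe filesystem and partition-table signatures so the disk comes back clean for the next test. A minimal sketch of that sequence, using only the commands that appear in the trace:

    mnt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"                  # only unmount if the test mount is still active
    [ -b /dev/nvme0n1p1 ] && wipefs --all /dev/nvme0n1p1   # clear the ext4 signature on the partition, if it exists
    [ -b /dev/nvme0n1 ]   && wipefs --all /dev/nvme0n1     # clear GPT/PMBR signatures on the whole disk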
00:06:03.725 15:54:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:03.725 15:54:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:03.725 15:54:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:03.725 15:54:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:03.725 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:03.725 00:06:03.725 real 0m17.113s 00:06:03.725 user 0m5.422s 00:06:03.725 sys 0m9.378s 00:06:03.725 15:54:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.725 15:54:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:03.725 ************************************ 00:06:03.725 END TEST nvme_mount 00:06:03.725 ************************************ 00:06:03.725 15:54:21 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:03.725 15:54:21 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.725 15:54:21 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.725 15:54:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:03.725 ************************************ 00:06:03.725 START TEST dm_mount 00:06:03.725 ************************************ 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:03.725 15:54:21 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:04.664 Creating new GPT entries in memory. 00:06:04.664 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:04.664 other utilities. 00:06:04.664 15:54:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:04.664 15:54:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:04.664 15:54:22 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:04.664 15:54:22 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:04.664 15:54:22 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:05.653 Creating new GPT entries in memory. 00:06:05.653 The operation has completed successfully. 00:06:05.653 15:54:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:05.653 15:54:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:05.653 15:54:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:05.653 15:54:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:05.653 15:54:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:06:06.588 The operation has completed successfully. 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 135818 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:06.588 15:54:24 
setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:06.588 15:54:24 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:06.847 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:dd:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:06.847 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0 00:06:06.847 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:06.847 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:06.847 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:06.847 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:06.847 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:06.847 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:06.847 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:06.847 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.847 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0 00:06:06.847 15:54:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:06.847 15:54:24 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:06.847 15:54:24 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.139 
15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.139 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.140 
15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:10.140 15:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:dd:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:11.518 15:54:29 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.810 15:54:32 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:06:14.810 15:54:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.715 15:54:34 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:16.715 15:54:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:16.715 15:54:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:16.715 15:54:34 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:16.715 15:54:34 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:16.715 15:54:34 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:16.715 15:54:34 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:16.715 15:54:34 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:16.715 15:54:34 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:16.715 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:16.715 15:54:34 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:16.715 15:54:34 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:16.715 00:06:16.715 real 0m12.949s 00:06:16.715 user 0m3.617s 00:06:16.715 sys 0m6.235s 00:06:16.715 15:54:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.715 15:54:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:16.715 ************************************ 00:06:16.715 END TEST dm_mount 00:06:16.715 ************************************ 00:06:16.715 15:54:34 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:16.715 15:54:34 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:16.715 15:54:34 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:16.715 15:54:34 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:16.715 15:54:34 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:16.715 15:54:34 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:16.715 15:54:34 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:16.715 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:16.715 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:06:16.715 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:16.715 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:16.715 15:54:34 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:16.715 15:54:34 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:16.715 15:54:34 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:16.715 15:54:34 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:16.715 15:54:34 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:16.715 15:54:34 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:16.715 15:54:34 setup.sh.devices -- setup/devices.sh@15 -- 
# wipefs --all /dev/nvme0n1 00:06:16.715 00:06:16.715 real 0m36.024s 00:06:16.715 user 0m11.200s 00:06:16.715 sys 0m19.265s 00:06:16.715 15:54:34 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.715 15:54:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:16.715 ************************************ 00:06:16.715 END TEST devices 00:06:16.715 ************************************ 00:06:16.715 00:06:16.715 real 2m24.397s 00:06:16.715 user 0m41.508s 00:06:16.715 sys 1m11.363s 00:06:16.715 15:54:34 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.715 15:54:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:16.715 ************************************ 00:06:16.715 END TEST setup.sh 00:06:16.715 ************************************ 00:06:16.975 15:54:34 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:06:20.270 Hugepages 00:06:20.270 node hugesize free / total 00:06:20.270 node0 1048576kB 0 / 0 00:06:20.270 node0 2048kB 2048 / 2048 00:06:20.270 node1 1048576kB 0 / 0 00:06:20.270 node1 2048kB 0 / 0 00:06:20.270 00:06:20.270 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:20.270 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:06:20.270 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:06:20.270 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:20.270 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:20.270 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:20.270 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:20.270 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:06:20.270 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:06:20.270 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:20.270 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:20.270 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:06:20.270 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:20.270 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:20.270 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:20.270 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:20.270 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:20.530 NVMe 0000:dc:00.0 8086 0953 1 nvme nvme3 nvme3n1 00:06:20.530 NVMe 0000:dd:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:06:20.530 NVMe 0000:de:00.0 8086 0953 1 nvme nvme2 nvme2n1 00:06:20.790 NVMe 0000:df:00.0 8086 0a54 1 nvme nvme1 nvme1n1 00:06:20.790 15:54:38 -- spdk/autotest.sh@130 -- # uname -s 00:06:20.790 15:54:38 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:20.790 15:54:38 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:20.790 15:54:38 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:24.086 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:24.086 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:24.086 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:24.086 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:24.345 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:24.345 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:24.345 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:24.345 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:24.345 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:24.345 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:24.345 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:24.345 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:24.345 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:24.345 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:24.345 
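Note: the repeated "ioatdma -> vfio-pci" lines above and below are produced by scripts/setup.sh rebinding the I/OAT engines and NVMe controllers for userspace testing. A minimal sketch of the setup.sh modes this run exercises (status, bind, reset), assuming it is run as root from the SPDK repository root; the HUGEMEM override is an assumption, not something shown in this log:

    # print hugepage totals and the per-device driver table, as in the status output above
    sudo ./scripts/setup.sh status
    # rebind SPDK-managed devices to vfio-pci (optionally reserving extra hugepages)
    sudo HUGEMEM=4096 ./scripts/setup.sh
    # hand the devices back to the kernel nvme/ioatdma drivers
    sudo ./scripts/setup.sh reset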
0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:24.345 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:26.254 0000:dd:00.0 (8086 0a54): nvme -> vfio-pci 00:06:26.254 0000:df:00.0 (8086 0a54): nvme -> vfio-pci 00:06:26.254 0000:de:00.0 (8086 0953): nvme -> vfio-pci 00:06:26.513 0000:dc:00.0 (8086 0953): nvme -> vfio-pci 00:06:27.890 15:54:45 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:28.829 15:54:46 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:28.829 15:54:46 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:28.829 15:54:46 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:28.829 15:54:46 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:28.829 15:54:46 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:28.829 15:54:46 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:28.829 15:54:46 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:28.829 15:54:46 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:28.829 15:54:46 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:28.829 15:54:46 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:06:28.829 15:54:46 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:dc:00.0 0000:dd:00.0 0000:de:00.0 0000:df:00.0 00:06:28.829 15:54:46 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:32.125 Waiting for block devices as requested 00:06:32.125 0000:dd:00.0 (8086 0a54): vfio-pci -> nvme 00:06:32.385 0000:df:00.0 (8086 0a54): vfio-pci -> nvme 00:06:32.385 0000:de:00.0 (8086 0953): vfio-pci -> nvme 00:06:34.925 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:35.183 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:35.183 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:35.183 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:35.443 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:35.443 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:35.443 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:35.702 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:35.702 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:35.702 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:35.702 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:35.962 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:35.962 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:35.962 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:36.222 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:36.222 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:36.222 0000:dc:00.0 (8086 0953): vfio-pci -> nvme 00:06:40.414 15:54:58 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:40.414 15:54:58 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:dc:00.0 00:06:40.414 15:54:58 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:40.414 15:54:58 -- common/autotest_common.sh@1502 -- # grep 0000:dc:00.0/nvme/nvme 00:06:40.414 15:54:58 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:00.0/0000:dc:00.0/nvme/nvme3 00:06:40.414 15:54:58 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:00.0/0000:dc:00.0/nvme/nvme3 ]] 00:06:40.414 15:54:58 -- 
common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:00.0/0000:dc:00.0/nvme/nvme3 00:06:40.414 15:54:58 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:06:40.414 15:54:58 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:06:40.414 15:54:58 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:06:40.414 15:54:58 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:06:40.414 15:54:58 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:40.414 15:54:58 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:40.673 15:54:58 -- common/autotest_common.sh@1545 -- # oacs=' 0x6' 00:06:40.673 15:54:58 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=0 00:06:40.673 15:54:58 -- common/autotest_common.sh@1548 -- # [[ 0 -ne 0 ]] 00:06:40.673 15:54:58 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:40.673 15:54:58 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:dd:00.0 00:06:40.673 15:54:58 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:40.673 15:54:58 -- common/autotest_common.sh@1502 -- # grep 0000:dd:00.0/nvme/nvme 00:06:40.673 15:54:58 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:01.0/0000:dd:00.0/nvme/nvme0 00:06:40.673 15:54:58 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:01.0/0000:dd:00.0/nvme/nvme0 ]] 00:06:40.673 15:54:58 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:01.0/0000:dd:00.0/nvme/nvme0 00:06:40.673 15:54:58 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:40.673 15:54:58 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:40.673 15:54:58 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:40.673 15:54:58 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:40.673 15:54:58 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:40.673 15:54:58 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:40.673 15:54:58 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:06:40.673 15:54:58 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:40.673 15:54:58 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:40.673 15:54:58 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:40.673 15:54:58 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:40.673 15:54:58 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:40.673 15:54:58 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:40.673 15:54:58 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:40.673 15:54:58 -- common/autotest_common.sh@1557 -- # continue 00:06:40.673 15:54:58 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:40.673 15:54:58 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:de:00.0 00:06:40.673 15:54:58 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:40.673 15:54:58 -- common/autotest_common.sh@1502 -- # grep 0000:de:00.0/nvme/nvme 00:06:40.673 15:54:58 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:02.0/0000:de:00.0/nvme/nvme2 00:06:40.673 15:54:58 -- common/autotest_common.sh@1503 -- # [[ 
-z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:02.0/0000:de:00.0/nvme/nvme2 ]] 00:06:40.673 15:54:58 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:02.0/0000:de:00.0/nvme/nvme2 00:06:40.673 15:54:58 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:06:40.673 15:54:58 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:06:40.673 15:54:58 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:06:40.673 15:54:58 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:06:40.673 15:54:58 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:40.673 15:54:58 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:40.673 15:54:58 -- common/autotest_common.sh@1545 -- # oacs=' 0x6' 00:06:40.673 15:54:58 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=0 00:06:40.673 15:54:58 -- common/autotest_common.sh@1548 -- # [[ 0 -ne 0 ]] 00:06:40.673 15:54:58 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:40.673 15:54:58 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:df:00.0 00:06:40.673 15:54:58 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:40.673 15:54:58 -- common/autotest_common.sh@1502 -- # grep 0000:df:00.0/nvme/nvme 00:06:40.673 15:54:58 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:03.0/0000:df:00.0/nvme/nvme1 00:06:40.673 15:54:58 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:03.0/0000:df:00.0/nvme/nvme1 ]] 00:06:40.673 15:54:58 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:03.0/0000:df:00.0/nvme/nvme1 00:06:40.673 15:54:58 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:06:40.673 15:54:58 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:06:40.673 15:54:58 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:06:40.673 15:54:58 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:06:40.673 15:54:58 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:40.673 15:54:58 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:40.673 15:54:58 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:06:40.673 15:54:58 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:40.673 15:54:58 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:40.673 15:54:58 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:06:40.673 15:54:58 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:40.673 15:54:58 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:40.673 15:54:58 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:40.673 15:54:58 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:40.673 15:54:58 -- common/autotest_common.sh@1557 -- # continue 00:06:40.673 15:54:58 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:40.673 15:54:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:40.673 15:54:58 -- common/autotest_common.sh@10 -- # set +x 00:06:40.673 15:54:58 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:40.673 15:54:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:40.673 15:54:58 -- common/autotest_common.sh@10 -- # set +x 00:06:40.673 15:54:58 -- spdk/autotest.sh@139 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:44.871 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:44.871 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:44.871 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:44.871 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:44.871 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:44.871 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:44.871 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:44.871 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:44.871 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:44.871 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:44.871 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:44.871 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:44.871 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:44.871 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:44.871 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:44.871 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:45.809 0000:dd:00.0 (8086 0a54): nvme -> vfio-pci 00:06:46.069 0000:df:00.0 (8086 0a54): nvme -> vfio-pci 00:06:46.328 0000:de:00.0 (8086 0953): nvme -> vfio-pci 00:06:46.328 0000:dc:00.0 (8086 0953): nvme -> vfio-pci 00:06:47.705 15:55:05 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:47.705 15:55:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:47.705 15:55:05 -- common/autotest_common.sh@10 -- # set +x 00:06:47.705 15:55:05 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:47.705 15:55:05 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:47.964 15:55:05 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:47.964 15:55:05 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:47.964 15:55:05 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:47.964 15:55:05 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:47.964 15:55:05 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:47.964 15:55:05 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:47.964 15:55:05 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:47.964 15:55:05 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:47.964 15:55:05 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:47.964 15:55:05 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:06:47.964 15:55:05 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:dc:00.0 0000:dd:00.0 0000:de:00.0 0000:df:00.0 00:06:47.964 15:55:05 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:47.964 15:55:05 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:dc:00.0/device 00:06:47.964 15:55:05 -- common/autotest_common.sh@1580 -- # device=0x0953 00:06:47.964 15:55:05 -- common/autotest_common.sh@1581 -- # [[ 0x0953 == \0\x\0\a\5\4 ]] 00:06:47.964 15:55:05 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:47.964 15:55:05 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:dd:00.0/device 00:06:47.964 15:55:05 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:06:47.964 15:55:05 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:47.964 15:55:05 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:06:47.964 15:55:05 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:47.964 15:55:05 -- common/autotest_common.sh@1580 -- # cat 
/sys/bus/pci/devices/0000:de:00.0/device 00:06:47.964 15:55:05 -- common/autotest_common.sh@1580 -- # device=0x0953 00:06:47.964 15:55:05 -- common/autotest_common.sh@1581 -- # [[ 0x0953 == \0\x\0\a\5\4 ]] 00:06:47.964 15:55:05 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:47.964 15:55:05 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:df:00.0/device 00:06:47.964 15:55:05 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:06:47.964 15:55:05 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:47.964 15:55:05 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:06:47.965 15:55:05 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:dd:00.0 0000:df:00.0 00:06:47.965 15:55:05 -- common/autotest_common.sh@1592 -- # [[ -z 0000:dd:00.0 ]] 00:06:47.965 15:55:05 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=148641 00:06:47.965 15:55:05 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:47.965 15:55:05 -- common/autotest_common.sh@1598 -- # waitforlisten 148641 00:06:47.965 15:55:05 -- common/autotest_common.sh@831 -- # '[' -z 148641 ']' 00:06:47.965 15:55:05 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.965 15:55:05 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.965 15:55:05 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.965 15:55:05 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.965 15:55:05 -- common/autotest_common.sh@10 -- # set +x 00:06:47.965 [2024-07-25 15:55:05.849264] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
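get_nvme_bdfs_by_id above keeps only the controllers whose PCI device ID matches 0x0a54 by reading each bdf's sysfs device file. A hedged, stand-alone sketch of the same filter (variable names here are illustrative; the real helpers live in autotest_common.sh):

    # enumerate NVMe bdfs the same way get_nvme_bdfs does above
    bdfs=($(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    opal_bdfs=()
    for bdf in "${bdfs[@]}"; do
        # PCI device ID from sysfs, e.g. 0x0a54 for the drives listed in the status table above
        dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $dev_id == 0x0a54 ]] && opal_bdfs+=("$bdf")
    done
    printf '%s\n' "${opal_bdfs[@]}"   # expected on this node: 0000:dd:00.0 0000:df:00.0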
00:06:47.965 [2024-07-25 15:55:05.849306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148641 ] 00:06:47.965 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.965 [2024-07-25 15:55:05.915418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.224 [2024-07-25 15:55:05.989216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.791 15:55:06 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.792 15:55:06 -- common/autotest_common.sh@864 -- # return 0 00:06:48.792 15:55:06 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:06:48.792 15:55:06 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:06:48.792 15:55:06 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:dd:00.0 00:06:52.084 nvme0n1 00:06:52.084 15:55:09 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:52.084 [2024-07-25 15:55:09.820606] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:52.084 request: 00:06:52.084 { 00:06:52.084 "nvme_ctrlr_name": "nvme0", 00:06:52.084 "password": "test", 00:06:52.084 "method": "bdev_nvme_opal_revert", 00:06:52.084 "req_id": 1 00:06:52.084 } 00:06:52.084 Got JSON-RPC error response 00:06:52.084 response: 00:06:52.084 { 00:06:52.084 "code": -32602, 00:06:52.084 "message": "Invalid parameters" 00:06:52.084 } 00:06:52.084 15:55:09 -- common/autotest_common.sh@1604 -- # true 00:06:52.084 15:55:09 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:06:52.084 15:55:09 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:06:52.084 15:55:09 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme1 -t pcie -a 0000:df:00.0 00:06:55.376 nvme1n1 00:06:55.376 15:55:12 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme1 -p test 00:06:55.376 [2024-07-25 15:55:13.015668] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme1 not support opal 00:06:55.376 request: 00:06:55.376 { 00:06:55.376 "nvme_ctrlr_name": "nvme1", 00:06:55.376 "password": "test", 00:06:55.376 "method": "bdev_nvme_opal_revert", 00:06:55.376 "req_id": 1 00:06:55.376 } 00:06:55.376 Got JSON-RPC error response 00:06:55.376 response: 00:06:55.376 { 00:06:55.376 "code": -32602, 00:06:55.376 "message": "Invalid parameters" 00:06:55.376 } 00:06:55.376 15:55:13 -- common/autotest_common.sh@1604 -- # true 00:06:55.376 15:55:13 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:06:55.376 15:55:13 -- common/autotest_common.sh@1608 -- # killprocess 148641 00:06:55.376 15:55:13 -- common/autotest_common.sh@950 -- # '[' -z 148641 ']' 00:06:55.376 15:55:13 -- common/autotest_common.sh@954 -- # kill -0 148641 00:06:55.376 15:55:13 -- common/autotest_common.sh@955 -- # uname 00:06:55.376 15:55:13 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.376 15:55:13 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 148641 00:06:55.376 15:55:13 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.376 15:55:13 -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.376 15:55:13 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 148641' 00:06:55.376 killing process with pid 148641 00:06:55.376 15:55:13 -- common/autotest_common.sh@969 -- # kill 148641 00:06:55.376 15:55:13 -- common/autotest_common.sh@974 -- # wait 148641 00:06:58.667 15:55:16 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:58.667 15:55:16 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:58.667 15:55:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:58.667 15:55:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:58.667 15:55:16 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:58.667 15:55:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:58.667 15:55:16 -- common/autotest_common.sh@10 -- # set +x 00:06:58.667 15:55:16 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:58.667 15:55:16 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:58.667 15:55:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.667 15:55:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.667 15:55:16 -- common/autotest_common.sh@10 -- # set +x 00:06:58.667 ************************************ 00:06:58.667 START TEST env 00:06:58.667 ************************************ 00:06:58.667 15:55:16 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:58.667 * Looking for test storage... 00:06:58.667 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:06:58.667 15:55:16 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:58.667 15:55:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.667 15:55:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.667 15:55:16 env -- common/autotest_common.sh@10 -- # set +x 00:06:58.667 ************************************ 00:06:58.667 START TEST env_memory 00:06:58.667 ************************************ 00:06:58.667 15:55:16 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:58.667 00:06:58.667 00:06:58.667 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.667 http://cunit.sourceforge.net/ 00:06:58.667 00:06:58.667 00:06:58.667 Suite: memory 00:06:58.667 Test: alloc and free memory map ...[2024-07-25 15:55:16.183658] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:58.667 passed 00:06:58.667 Test: mem map translation ...[2024-07-25 15:55:16.196983] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:58.667 [2024-07-25 15:55:16.196997] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:58.667 [2024-07-25 15:55:16.197030] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:58.667 [2024-07-25 15:55:16.197037] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:58.667 passed 00:06:58.667 Test: mem map registration ...[2024-07-25 15:55:16.219128] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:58.667 [2024-07-25 15:55:16.219142] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:58.667 passed 00:06:58.667 Test: mem map adjacent registrations ...passed 00:06:58.667 00:06:58.667 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.667 suites 1 1 n/a 0 0 00:06:58.667 tests 4 4 4 0 0 00:06:58.667 asserts 152 152 152 0 n/a 00:06:58.667 00:06:58.667 Elapsed time = 0.078 seconds 00:06:58.667 00:06:58.667 real 0m0.085s 00:06:58.667 user 0m0.080s 00:06:58.667 sys 0m0.005s 00:06:58.667 15:55:16 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.667 15:55:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:58.667 ************************************ 00:06:58.667 END TEST env_memory 00:06:58.667 ************************************ 00:06:58.667 15:55:16 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:58.667 15:55:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.667 15:55:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.667 15:55:16 env -- common/autotest_common.sh@10 -- # set +x 00:06:58.667 ************************************ 00:06:58.667 START TEST env_vtophys 00:06:58.667 ************************************ 00:06:58.667 15:55:16 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:58.667 EAL: lib.eal log level changed from notice to debug 00:06:58.667 EAL: Detected lcore 0 as core 0 on socket 0 00:06:58.667 EAL: Detected lcore 1 as core 1 on socket 0 00:06:58.667 EAL: Detected lcore 2 as core 2 on socket 0 00:06:58.667 EAL: Detected lcore 3 as core 3 on socket 0 00:06:58.667 EAL: Detected lcore 4 as core 4 on socket 0 00:06:58.667 EAL: Detected lcore 5 as core 5 on socket 0 00:06:58.667 EAL: Detected lcore 6 as core 8 on socket 0 00:06:58.667 EAL: Detected lcore 7 as core 9 on socket 0 00:06:58.667 EAL: Detected lcore 8 as core 10 on socket 0 00:06:58.667 EAL: Detected lcore 9 as core 11 on socket 0 00:06:58.667 EAL: Detected lcore 10 as core 12 on socket 0 00:06:58.667 EAL: Detected lcore 11 as core 16 on socket 0 00:06:58.667 EAL: Detected lcore 12 as core 17 on socket 0 00:06:58.667 EAL: Detected lcore 13 as core 18 on socket 0 00:06:58.667 EAL: Detected lcore 14 as core 19 on socket 0 00:06:58.667 EAL: Detected lcore 15 as core 20 on socket 0 00:06:58.667 EAL: Detected lcore 16 as core 21 on socket 0 00:06:58.667 EAL: Detected lcore 17 as core 24 on socket 0 00:06:58.667 EAL: Detected lcore 18 as core 25 on socket 0 00:06:58.667 EAL: Detected lcore 19 as core 26 on socket 0 00:06:58.667 EAL: Detected lcore 20 as core 27 on socket 0 00:06:58.667 EAL: Detected lcore 21 as core 28 on socket 0 00:06:58.667 EAL: Detected lcore 22 as core 0 on socket 1 00:06:58.667 EAL: Detected lcore 23 as core 1 on socket 1 00:06:58.667 EAL: Detected lcore 24 as core 2 on socket 1 00:06:58.667 EAL: Detected lcore 25 as core 3 on socket 1 00:06:58.667 EAL: Detected lcore 26 as core 4 on 
socket 1 00:06:58.667 EAL: Detected lcore 27 as core 5 on socket 1 00:06:58.667 EAL: Detected lcore 28 as core 8 on socket 1 00:06:58.667 EAL: Detected lcore 29 as core 9 on socket 1 00:06:58.667 EAL: Detected lcore 30 as core 10 on socket 1 00:06:58.667 EAL: Detected lcore 31 as core 11 on socket 1 00:06:58.667 EAL: Detected lcore 32 as core 12 on socket 1 00:06:58.667 EAL: Detected lcore 33 as core 16 on socket 1 00:06:58.667 EAL: Detected lcore 34 as core 17 on socket 1 00:06:58.667 EAL: Detected lcore 35 as core 18 on socket 1 00:06:58.667 EAL: Detected lcore 36 as core 19 on socket 1 00:06:58.667 EAL: Detected lcore 37 as core 20 on socket 1 00:06:58.667 EAL: Detected lcore 38 as core 21 on socket 1 00:06:58.667 EAL: Detected lcore 39 as core 24 on socket 1 00:06:58.667 EAL: Detected lcore 40 as core 25 on socket 1 00:06:58.667 EAL: Detected lcore 41 as core 26 on socket 1 00:06:58.667 EAL: Detected lcore 42 as core 27 on socket 1 00:06:58.667 EAL: Detected lcore 43 as core 28 on socket 1 00:06:58.667 EAL: Detected lcore 44 as core 0 on socket 0 00:06:58.667 EAL: Detected lcore 45 as core 1 on socket 0 00:06:58.667 EAL: Detected lcore 46 as core 2 on socket 0 00:06:58.667 EAL: Detected lcore 47 as core 3 on socket 0 00:06:58.667 EAL: Detected lcore 48 as core 4 on socket 0 00:06:58.667 EAL: Detected lcore 49 as core 5 on socket 0 00:06:58.667 EAL: Detected lcore 50 as core 8 on socket 0 00:06:58.667 EAL: Detected lcore 51 as core 9 on socket 0 00:06:58.667 EAL: Detected lcore 52 as core 10 on socket 0 00:06:58.667 EAL: Detected lcore 53 as core 11 on socket 0 00:06:58.667 EAL: Detected lcore 54 as core 12 on socket 0 00:06:58.667 EAL: Detected lcore 55 as core 16 on socket 0 00:06:58.667 EAL: Detected lcore 56 as core 17 on socket 0 00:06:58.667 EAL: Detected lcore 57 as core 18 on socket 0 00:06:58.667 EAL: Detected lcore 58 as core 19 on socket 0 00:06:58.667 EAL: Detected lcore 59 as core 20 on socket 0 00:06:58.667 EAL: Detected lcore 60 as core 21 on socket 0 00:06:58.667 EAL: Detected lcore 61 as core 24 on socket 0 00:06:58.667 EAL: Detected lcore 62 as core 25 on socket 0 00:06:58.667 EAL: Detected lcore 63 as core 26 on socket 0 00:06:58.667 EAL: Detected lcore 64 as core 27 on socket 0 00:06:58.667 EAL: Detected lcore 65 as core 28 on socket 0 00:06:58.667 EAL: Detected lcore 66 as core 0 on socket 1 00:06:58.667 EAL: Detected lcore 67 as core 1 on socket 1 00:06:58.667 EAL: Detected lcore 68 as core 2 on socket 1 00:06:58.667 EAL: Detected lcore 69 as core 3 on socket 1 00:06:58.667 EAL: Detected lcore 70 as core 4 on socket 1 00:06:58.667 EAL: Detected lcore 71 as core 5 on socket 1 00:06:58.667 EAL: Detected lcore 72 as core 8 on socket 1 00:06:58.667 EAL: Detected lcore 73 as core 9 on socket 1 00:06:58.668 EAL: Detected lcore 74 as core 10 on socket 1 00:06:58.668 EAL: Detected lcore 75 as core 11 on socket 1 00:06:58.668 EAL: Detected lcore 76 as core 12 on socket 1 00:06:58.668 EAL: Detected lcore 77 as core 16 on socket 1 00:06:58.668 EAL: Detected lcore 78 as core 17 on socket 1 00:06:58.668 EAL: Detected lcore 79 as core 18 on socket 1 00:06:58.668 EAL: Detected lcore 80 as core 19 on socket 1 00:06:58.668 EAL: Detected lcore 81 as core 20 on socket 1 00:06:58.668 EAL: Detected lcore 82 as core 21 on socket 1 00:06:58.668 EAL: Detected lcore 83 as core 24 on socket 1 00:06:58.668 EAL: Detected lcore 84 as core 25 on socket 1 00:06:58.668 EAL: Detected lcore 85 as core 26 on socket 1 00:06:58.668 EAL: Detected lcore 86 as core 27 on socket 1 00:06:58.668 EAL: 
Detected lcore 87 as core 28 on socket 1 00:06:58.668 EAL: Maximum logical cores by configuration: 128 00:06:58.668 EAL: Detected CPU lcores: 88 00:06:58.668 EAL: Detected NUMA nodes: 2 00:06:58.668 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:58.668 EAL: Checking presence of .so 'librte_eal.so.24' 00:06:58.668 EAL: Checking presence of .so 'librte_eal.so' 00:06:58.668 EAL: Detected static linkage of DPDK 00:06:58.668 EAL: No shared files mode enabled, IPC will be disabled 00:06:58.668 EAL: Bus pci wants IOVA as 'DC' 00:06:58.668 EAL: Buses did not request a specific IOVA mode. 00:06:58.668 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:58.668 EAL: Selected IOVA mode 'VA' 00:06:58.668 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.668 EAL: Probing VFIO support... 00:06:58.668 EAL: IOMMU type 1 (Type 1) is supported 00:06:58.668 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:58.668 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:58.668 EAL: VFIO support initialized 00:06:58.668 EAL: Ask a virtual area of 0x2e000 bytes 00:06:58.668 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:58.668 EAL: Setting up physically contiguous memory... 00:06:58.668 EAL: Setting maximum number of open files to 524288 00:06:58.668 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:58.668 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:58.668 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:58.668 EAL: Ask a virtual area of 0x61000 bytes 00:06:58.668 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:58.668 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:58.668 EAL: Ask a virtual area of 0x400000000 bytes 00:06:58.668 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:58.668 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:58.668 EAL: Ask a virtual area of 0x61000 bytes 00:06:58.668 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:58.668 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:58.668 EAL: Ask a virtual area of 0x400000000 bytes 00:06:58.668 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:58.668 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:58.668 EAL: Ask a virtual area of 0x61000 bytes 00:06:58.668 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:58.668 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:58.668 EAL: Ask a virtual area of 0x400000000 bytes 00:06:58.668 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:58.668 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:58.668 EAL: Ask a virtual area of 0x61000 bytes 00:06:58.668 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:58.668 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:58.668 EAL: Ask a virtual area of 0x400000000 bytes 00:06:58.668 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:58.668 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:58.668 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:58.668 EAL: Ask a virtual area of 0x61000 bytes 00:06:58.668 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:58.668 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:58.668 EAL: Ask a virtual area of 0x400000000 bytes 00:06:58.668 EAL: Virtual area found at 0x201000a00000 (size = 
0x400000000) 00:06:58.668 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:58.668 EAL: Ask a virtual area of 0x61000 bytes 00:06:58.668 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:58.668 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:58.668 EAL: Ask a virtual area of 0x400000000 bytes 00:06:58.668 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:58.668 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:58.668 EAL: Ask a virtual area of 0x61000 bytes 00:06:58.668 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:58.668 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:58.668 EAL: Ask a virtual area of 0x400000000 bytes 00:06:58.668 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:58.668 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:58.668 EAL: Ask a virtual area of 0x61000 bytes 00:06:58.668 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:58.668 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:58.668 EAL: Ask a virtual area of 0x400000000 bytes 00:06:58.668 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:58.668 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:58.668 EAL: Hugepages will be freed exactly as allocated. 00:06:58.668 EAL: No shared files mode enabled, IPC is disabled 00:06:58.668 EAL: No shared files mode enabled, IPC is disabled 00:06:58.668 EAL: TSC frequency is ~2100000 KHz 00:06:58.668 EAL: Main lcore 0 is ready (tid=7efddf8fba00;cpuset=[0]) 00:06:58.668 EAL: Trying to obtain current memory policy. 00:06:58.668 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.668 EAL: Restoring previous memory policy: 0 00:06:58.668 EAL: request: mp_malloc_sync 00:06:58.668 EAL: No shared files mode enabled, IPC is disabled 00:06:58.668 EAL: Heap on socket 0 was expanded by 2MB 00:06:58.668 EAL: No shared files mode enabled, IPC is disabled 00:06:58.668 EAL: Mem event callback 'spdk:(nil)' registered 00:06:58.668 00:06:58.668 00:06:58.668 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.668 http://cunit.sourceforge.net/ 00:06:58.668 00:06:58.668 00:06:58.668 Suite: components_suite 00:06:58.668 Test: vtophys_malloc_test ...passed 00:06:58.668 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:58.668 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.668 EAL: Restoring previous memory policy: 4 00:06:58.668 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.668 EAL: request: mp_malloc_sync 00:06:58.668 EAL: No shared files mode enabled, IPC is disabled 00:06:58.668 EAL: Heap on socket 0 was expanded by 4MB 00:06:58.668 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.668 EAL: request: mp_malloc_sync 00:06:58.668 EAL: No shared files mode enabled, IPC is disabled 00:06:58.668 EAL: Heap on socket 0 was shrunk by 4MB 00:06:58.668 EAL: Trying to obtain current memory policy. 
00:06:58.668 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.668 EAL: Restoring previous memory policy: 4 00:06:58.668 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.668 EAL: request: mp_malloc_sync 00:06:58.668 EAL: No shared files mode enabled, IPC is disabled 00:06:58.668 EAL: Heap on socket 0 was expanded by 6MB 00:06:58.668 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.668 EAL: request: mp_malloc_sync 00:06:58.668 EAL: No shared files mode enabled, IPC is disabled 00:06:58.668 EAL: Heap on socket 0 was shrunk by 6MB 00:06:58.668 EAL: Trying to obtain current memory policy. 00:06:58.668 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.668 EAL: Restoring previous memory policy: 4 00:06:58.668 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.668 EAL: request: mp_malloc_sync 00:06:58.668 EAL: No shared files mode enabled, IPC is disabled 00:06:58.668 EAL: Heap on socket 0 was expanded by 10MB 00:06:58.668 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.668 EAL: request: mp_malloc_sync 00:06:58.668 EAL: No shared files mode enabled, IPC is disabled 00:06:58.668 EAL: Heap on socket 0 was shrunk by 10MB 00:06:58.668 EAL: Trying to obtain current memory policy. 00:06:58.668 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.668 EAL: Restoring previous memory policy: 4 00:06:58.668 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.668 EAL: request: mp_malloc_sync 00:06:58.668 EAL: No shared files mode enabled, IPC is disabled 00:06:58.668 EAL: Heap on socket 0 was expanded by 18MB 00:06:58.668 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.668 EAL: request: mp_malloc_sync 00:06:58.668 EAL: No shared files mode enabled, IPC is disabled 00:06:58.668 EAL: Heap on socket 0 was shrunk by 18MB 00:06:58.668 EAL: Trying to obtain current memory policy. 00:06:58.668 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.668 EAL: Restoring previous memory policy: 4 00:06:58.668 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.668 EAL: request: mp_malloc_sync 00:06:58.668 EAL: No shared files mode enabled, IPC is disabled 00:06:58.668 EAL: Heap on socket 0 was expanded by 34MB 00:06:58.668 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.668 EAL: request: mp_malloc_sync 00:06:58.668 EAL: No shared files mode enabled, IPC is disabled 00:06:58.668 EAL: Heap on socket 0 was shrunk by 34MB 00:06:58.668 EAL: Trying to obtain current memory policy. 00:06:58.668 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.668 EAL: Restoring previous memory policy: 4 00:06:58.668 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.668 EAL: request: mp_malloc_sync 00:06:58.669 EAL: No shared files mode enabled, IPC is disabled 00:06:58.669 EAL: Heap on socket 0 was expanded by 66MB 00:06:58.669 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.669 EAL: request: mp_malloc_sync 00:06:58.669 EAL: No shared files mode enabled, IPC is disabled 00:06:58.669 EAL: Heap on socket 0 was shrunk by 66MB 00:06:58.669 EAL: Trying to obtain current memory policy. 
00:06:58.669 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.669 EAL: Restoring previous memory policy: 4 00:06:58.669 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.669 EAL: request: mp_malloc_sync 00:06:58.669 EAL: No shared files mode enabled, IPC is disabled 00:06:58.669 EAL: Heap on socket 0 was expanded by 130MB 00:06:58.669 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.669 EAL: request: mp_malloc_sync 00:06:58.669 EAL: No shared files mode enabled, IPC is disabled 00:06:58.669 EAL: Heap on socket 0 was shrunk by 130MB 00:06:58.669 EAL: Trying to obtain current memory policy. 00:06:58.669 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.669 EAL: Restoring previous memory policy: 4 00:06:58.669 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.669 EAL: request: mp_malloc_sync 00:06:58.669 EAL: No shared files mode enabled, IPC is disabled 00:06:58.669 EAL: Heap on socket 0 was expanded by 258MB 00:06:58.669 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.669 EAL: request: mp_malloc_sync 00:06:58.669 EAL: No shared files mode enabled, IPC is disabled 00:06:58.669 EAL: Heap on socket 0 was shrunk by 258MB 00:06:58.669 EAL: Trying to obtain current memory policy. 00:06:58.669 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.928 EAL: Restoring previous memory policy: 4 00:06:58.928 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.928 EAL: request: mp_malloc_sync 00:06:58.928 EAL: No shared files mode enabled, IPC is disabled 00:06:58.928 EAL: Heap on socket 0 was expanded by 514MB 00:06:58.928 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.928 EAL: request: mp_malloc_sync 00:06:58.928 EAL: No shared files mode enabled, IPC is disabled 00:06:58.928 EAL: Heap on socket 0 was shrunk by 514MB 00:06:58.928 EAL: Trying to obtain current memory policy. 
00:06:58.928 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.187 EAL: Restoring previous memory policy: 4 00:06:59.187 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.187 EAL: request: mp_malloc_sync 00:06:59.187 EAL: No shared files mode enabled, IPC is disabled 00:06:59.187 EAL: Heap on socket 0 was expanded by 1026MB 00:06:59.447 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.447 EAL: request: mp_malloc_sync 00:06:59.447 EAL: No shared files mode enabled, IPC is disabled 00:06:59.447 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:59.447 passed 00:06:59.447 00:06:59.447 Run Summary: Type Total Ran Passed Failed Inactive 00:06:59.447 suites 1 1 n/a 0 0 00:06:59.447 tests 2 2 2 0 0 00:06:59.447 asserts 497 497 497 0 n/a 00:06:59.447 00:06:59.447 Elapsed time = 0.959 seconds 00:06:59.447 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.447 EAL: request: mp_malloc_sync 00:06:59.447 EAL: No shared files mode enabled, IPC is disabled 00:06:59.447 EAL: Heap on socket 0 was shrunk by 2MB 00:06:59.447 EAL: No shared files mode enabled, IPC is disabled 00:06:59.447 EAL: No shared files mode enabled, IPC is disabled 00:06:59.447 EAL: No shared files mode enabled, IPC is disabled 00:06:59.447 00:06:59.447 real 0m1.076s 00:06:59.447 user 0m0.625s 00:06:59.447 sys 0m0.425s 00:06:59.447 15:55:17 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.447 15:55:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:59.447 ************************************ 00:06:59.447 END TEST env_vtophys 00:06:59.447 ************************************ 00:06:59.447 15:55:17 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:59.447 15:55:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.447 15:55:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.447 15:55:17 env -- common/autotest_common.sh@10 -- # set +x 00:06:59.707 ************************************ 00:06:59.707 START TEST env_pci 00:06:59.707 ************************************ 00:06:59.707 15:55:17 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:59.707 00:06:59.707 00:06:59.707 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.707 http://cunit.sourceforge.net/ 00:06:59.707 00:06:59.707 00:06:59.707 Suite: pci 00:06:59.707 Test: pci_hook ...[2024-07-25 15:55:17.468378] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 150508 has claimed it 00:06:59.707 EAL: Cannot find device (10000:00:01.0) 00:06:59.707 EAL: Failed to attach device on primary process 00:06:59.707 passed 00:06:59.707 00:06:59.707 Run Summary: Type Total Ran Passed Failed Inactive 00:06:59.707 suites 1 1 n/a 0 0 00:06:59.707 tests 1 1 1 0 0 00:06:59.707 asserts 25 25 25 0 n/a 00:06:59.707 00:06:59.707 Elapsed time = 0.028 seconds 00:06:59.707 00:06:59.707 real 0m0.044s 00:06:59.707 user 0m0.012s 00:06:59.707 sys 0m0.032s 00:06:59.707 15:55:17 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.707 15:55:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:59.707 ************************************ 00:06:59.707 END TEST env_pci 00:06:59.707 ************************************ 00:06:59.707 15:55:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:59.707 
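The env_pci failure above is the expected negative path: spdk_pci_device_claim serializes access with a per-bdf lock file under /var/tmp (the spdk_pci_lock_10000:00:01.0 path in the error), so a bdf that is already claimed, or a deliberately fake one, is rejected. A hedged housekeeping sketch for spotting stale lock files between runs; removing them is only safe when no SPDK process is still alive, and the pgrep guard here is an illustrative assumption:

    # list per-device claim locks left behind by SPDK processes
    ls -l /var/tmp/spdk_pci_lock_* 2>/dev/null
    # clear them only if no target or test binary is still running
    if ! pgrep -f spdk_tgt >/dev/null; then
        rm -f /var/tmp/spdk_pci_lock_*
    fi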
15:55:17 env -- env/env.sh@15 -- # uname 00:06:59.707 15:55:17 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:59.707 15:55:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:59.707 15:55:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:59.707 15:55:17 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:59.707 15:55:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.707 15:55:17 env -- common/autotest_common.sh@10 -- # set +x 00:06:59.707 ************************************ 00:06:59.707 START TEST env_dpdk_post_init 00:06:59.707 ************************************ 00:06:59.707 15:55:17 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:59.707 EAL: Detected CPU lcores: 88 00:06:59.707 EAL: Detected NUMA nodes: 2 00:06:59.707 EAL: Detected static linkage of DPDK 00:06:59.707 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:59.707 EAL: Selected IOVA mode 'VA' 00:06:59.707 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.707 EAL: VFIO support initialized 00:06:59.707 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:59.707 EAL: Using IOMMU type 1 (Type 1) 00:07:00.975 EAL: Probe PCI driver: spdk_nvme (8086:0953) device: 0000:dc:00.0 (socket 1) 00:07:01.915 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:dd:00.0 (socket 1) 00:07:02.833 EAL: Probe PCI driver: spdk_nvme (8086:0953) device: 0000:de:00.0 (socket 1) 00:07:03.772 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:df:00.0 (socket 1) 00:07:07.963 EAL: Releasing PCI mapped resource for 0000:df:00.0 00:07:07.963 EAL: Calling pci_unmap_resource for 0000:df:00.0 at 0x20200100c000 00:07:07.963 EAL: Releasing PCI mapped resource for 0000:dc:00.0 00:07:07.963 EAL: Calling pci_unmap_resource for 0000:dc:00.0 at 0x202001000000 00:07:08.222 EAL: Releasing PCI mapped resource for 0000:de:00.0 00:07:08.222 EAL: Calling pci_unmap_resource for 0000:de:00.0 at 0x202001008000 00:07:08.790 EAL: Releasing PCI mapped resource for 0000:dd:00.0 00:07:08.790 EAL: Calling pci_unmap_resource for 0000:dd:00.0 at 0x202001004000 00:07:09.049 Starting DPDK initialization... 00:07:09.049 Starting SPDK post initialization... 00:07:09.049 SPDK NVMe probe 00:07:09.049 Attaching to 0000:dc:00.0 00:07:09.049 Attaching to 0000:dd:00.0 00:07:09.049 Attaching to 0000:de:00.0 00:07:09.049 Attaching to 0000:df:00.0 00:07:09.049 Attached to 0000:dc:00.0 00:07:09.049 Attached to 0000:de:00.0 00:07:09.049 Attached to 0000:df:00.0 00:07:09.049 Attached to 0000:dd:00.0 00:07:09.049 Cleaning up... 
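env_dpdk_post_init above probes the four NVMe controllers through vfio-pci and unmaps them again. To reproduce that step outside the harness, the same binary can be run by hand with the core mask and base-virtaddr shown in the log; the HUGEMEM value is an assumption and the paths are relative to the SPDK repository root:

    # bind the NVMe/I-OAT devices to vfio-pci and reserve hugepages (root required)
    sudo HUGEMEM=2048 ./scripts/setup.sh
    # run the probe/unmap test the same way env.sh invokes it above
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
    # return the devices to the kernel drivers afterwards
    sudo ./scripts/setup.sh reset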
00:07:09.049 00:07:09.049 real 0m9.284s 00:07:09.049 user 0m5.954s 00:07:09.049 sys 0m0.371s 00:07:09.049 15:55:26 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.049 15:55:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:09.049 ************************************ 00:07:09.049 END TEST env_dpdk_post_init 00:07:09.049 ************************************ 00:07:09.049 15:55:26 env -- env/env.sh@26 -- # uname 00:07:09.049 15:55:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:09.049 15:55:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:09.049 15:55:26 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.049 15:55:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.049 15:55:26 env -- common/autotest_common.sh@10 -- # set +x 00:07:09.049 ************************************ 00:07:09.049 START TEST env_mem_callbacks 00:07:09.049 ************************************ 00:07:09.049 15:55:26 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:09.049 EAL: Detected CPU lcores: 88 00:07:09.049 EAL: Detected NUMA nodes: 2 00:07:09.049 EAL: Detected static linkage of DPDK 00:07:09.049 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:09.049 EAL: Selected IOVA mode 'VA' 00:07:09.049 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.049 EAL: VFIO support initialized 00:07:09.049 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:09.049 00:07:09.049 00:07:09.049 CUnit - A unit testing framework for C - Version 2.1-3 00:07:09.049 http://cunit.sourceforge.net/ 00:07:09.049 00:07:09.049 00:07:09.049 Suite: memory 00:07:09.049 Test: test ... 
00:07:09.049 register 0x200000200000 2097152 00:07:09.049 malloc 3145728 00:07:09.049 register 0x200000400000 4194304 00:07:09.049 buf 0x200000500000 len 3145728 PASSED 00:07:09.049 malloc 64 00:07:09.049 buf 0x2000004fff40 len 64 PASSED 00:07:09.049 malloc 4194304 00:07:09.049 register 0x200000800000 6291456 00:07:09.049 buf 0x200000a00000 len 4194304 PASSED 00:07:09.049 free 0x200000500000 3145728 00:07:09.049 free 0x2000004fff40 64 00:07:09.049 unregister 0x200000400000 4194304 PASSED 00:07:09.049 free 0x200000a00000 4194304 00:07:09.049 unregister 0x200000800000 6291456 PASSED 00:07:09.049 malloc 8388608 00:07:09.049 register 0x200000400000 10485760 00:07:09.049 buf 0x200000600000 len 8388608 PASSED 00:07:09.049 free 0x200000600000 8388608 00:07:09.049 unregister 0x200000400000 10485760 PASSED 00:07:09.049 passed 00:07:09.049 00:07:09.049 Run Summary: Type Total Ran Passed Failed Inactive 00:07:09.049 suites 1 1 n/a 0 0 00:07:09.049 tests 1 1 1 0 0 00:07:09.049 asserts 15 15 15 0 n/a 00:07:09.049 00:07:09.049 Elapsed time = 0.008 seconds 00:07:09.049 00:07:09.049 real 0m0.057s 00:07:09.049 user 0m0.017s 00:07:09.049 sys 0m0.040s 00:07:09.049 15:55:26 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.049 15:55:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:09.049 ************************************ 00:07:09.049 END TEST env_mem_callbacks 00:07:09.049 ************************************ 00:07:09.049 00:07:09.049 real 0m10.967s 00:07:09.049 user 0m6.861s 00:07:09.049 sys 0m1.150s 00:07:09.049 15:55:27 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.049 15:55:27 env -- common/autotest_common.sh@10 -- # set +x 00:07:09.049 ************************************ 00:07:09.049 END TEST env 00:07:09.049 ************************************ 00:07:09.308 15:55:27 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:07:09.308 15:55:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.308 15:55:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.308 15:55:27 -- common/autotest_common.sh@10 -- # set +x 00:07:09.308 ************************************ 00:07:09.308 START TEST rpc 00:07:09.308 ************************************ 00:07:09.308 15:55:27 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:07:09.308 * Looking for test storage... 00:07:09.308 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:07:09.308 15:55:27 rpc -- rpc/rpc.sh@65 -- # spdk_pid=152118 00:07:09.308 15:55:27 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:09.308 15:55:27 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:09.308 15:55:27 rpc -- rpc/rpc.sh@67 -- # waitforlisten 152118 00:07:09.308 15:55:27 rpc -- common/autotest_common.sh@831 -- # '[' -z 152118 ']' 00:07:09.308 15:55:27 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.308 15:55:27 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.308 15:55:27 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
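The rpc suite starting here launches spdk_tgt with the bdev tracepoint group enabled and then blocks in waitforlisten until the RPC socket answers. A simplified sketch of that start-and-wait pattern follows; the real helper in common/autotest_common.sh also enforces a retry limit (max_retries=100 in the trace above), and scripts/rpc.py is used here in place of the harness's rpc_cmd wrapper:

```bash
# Simplified start-and-wait sketch.  Binary, flags and socket path match the
# ones printed in the log; the polling loop itself is an illustration.
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
SOCK=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/spdk_tgt" -e bdev &
spdk_pid=$!

until "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" spdk_get_version >/dev/null 2>&1; do
    kill -0 "$spdk_pid" || { echo "spdk_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done
echo "spdk_tgt (pid $spdk_pid) is ready on $SOCK"
```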
00:07:09.308 15:55:27 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.308 15:55:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.308 [2024-07-25 15:55:27.191961] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:07:09.308 [2024-07-25 15:55:27.192033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152118 ] 00:07:09.308 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.308 [2024-07-25 15:55:27.264230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.567 [2024-07-25 15:55:27.341359] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:09.567 [2024-07-25 15:55:27.341397] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 152118' to capture a snapshot of events at runtime. 00:07:09.567 [2024-07-25 15:55:27.341403] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.567 [2024-07-25 15:55:27.341409] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.567 [2024-07-25 15:55:27.341413] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid152118 for offline analysis/debug. 00:07:09.567 [2024-07-25 15:55:27.341433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.135 15:55:28 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.135 15:55:28 rpc -- common/autotest_common.sh@864 -- # return 0 00:07:10.135 15:55:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:07:10.135 15:55:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:07:10.135 15:55:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:10.135 15:55:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:10.135 15:55:28 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.135 15:55:28 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.135 15:55:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.135 ************************************ 00:07:10.135 START TEST rpc_integrity 00:07:10.135 ************************************ 00:07:10.135 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:10.135 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:10.135 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.135 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.135 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.135 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:10.135 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 
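Because the target was started with '-e bdev', the app_setup_trace notices above advertise a shared-memory trace file (/dev/shm/spdk_tgt_trace.pid152118), and rpc_trace_cmd_test later verifies this through trace_get_info. A hedged sketch of that verification, using only RPCs that appear in this log (SPDK_DIR as in the earlier sketch):

```bash
# Ask the running target for its trace state and confirm the bdev tracepoint
# group is enabled, mirroring the jq checks rpc_trace_cmd_test performs below.
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
info=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock trace_get_info)

echo "$info" | jq -e 'has("tpoint_group_mask") and has("tpoint_shm_path")' >/dev/null
[ "$(echo "$info" | jq -r '.bdev.tpoint_mask')" != "0x0" ] && echo "bdev tracepoints are enabled"
```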
00:07:10.135 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:10.135 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:10.135 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.135 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.135 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.135 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:10.135 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:10.135 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.135 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.394 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.394 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:10.394 { 00:07:10.394 "name": "Malloc0", 00:07:10.394 "aliases": [ 00:07:10.394 "d8849146-0ec3-4d46-a073-1f02b18f1d33" 00:07:10.394 ], 00:07:10.394 "product_name": "Malloc disk", 00:07:10.394 "block_size": 512, 00:07:10.394 "num_blocks": 16384, 00:07:10.394 "uuid": "d8849146-0ec3-4d46-a073-1f02b18f1d33", 00:07:10.394 "assigned_rate_limits": { 00:07:10.394 "rw_ios_per_sec": 0, 00:07:10.394 "rw_mbytes_per_sec": 0, 00:07:10.394 "r_mbytes_per_sec": 0, 00:07:10.394 "w_mbytes_per_sec": 0 00:07:10.394 }, 00:07:10.394 "claimed": false, 00:07:10.394 "zoned": false, 00:07:10.394 "supported_io_types": { 00:07:10.394 "read": true, 00:07:10.394 "write": true, 00:07:10.394 "unmap": true, 00:07:10.394 "flush": true, 00:07:10.394 "reset": true, 00:07:10.394 "nvme_admin": false, 00:07:10.394 "nvme_io": false, 00:07:10.394 "nvme_io_md": false, 00:07:10.394 "write_zeroes": true, 00:07:10.394 "zcopy": true, 00:07:10.394 "get_zone_info": false, 00:07:10.394 "zone_management": false, 00:07:10.394 "zone_append": false, 00:07:10.394 "compare": false, 00:07:10.394 "compare_and_write": false, 00:07:10.394 "abort": true, 00:07:10.394 "seek_hole": false, 00:07:10.394 "seek_data": false, 00:07:10.394 "copy": true, 00:07:10.394 "nvme_iov_md": false 00:07:10.394 }, 00:07:10.394 "memory_domains": [ 00:07:10.394 { 00:07:10.394 "dma_device_id": "system", 00:07:10.394 "dma_device_type": 1 00:07:10.394 }, 00:07:10.394 { 00:07:10.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.394 "dma_device_type": 2 00:07:10.394 } 00:07:10.394 ], 00:07:10.394 "driver_specific": {} 00:07:10.394 } 00:07:10.394 ]' 00:07:10.394 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:10.394 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:10.394 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:10.394 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.394 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.394 [2024-07-25 15:55:28.180115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:10.394 [2024-07-25 15:55:28.180145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.394 [2024-07-25 15:55:28.180159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x606f240 00:07:10.394 [2024-07-25 15:55:28.180166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.394 [2024-07-25 15:55:28.180962] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:07:10.395 [2024-07-25 15:55:28.180984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:10.395 Passthru0 00:07:10.395 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.395 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:10.395 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.395 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.395 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.395 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:10.395 { 00:07:10.395 "name": "Malloc0", 00:07:10.395 "aliases": [ 00:07:10.395 "d8849146-0ec3-4d46-a073-1f02b18f1d33" 00:07:10.395 ], 00:07:10.395 "product_name": "Malloc disk", 00:07:10.395 "block_size": 512, 00:07:10.395 "num_blocks": 16384, 00:07:10.395 "uuid": "d8849146-0ec3-4d46-a073-1f02b18f1d33", 00:07:10.395 "assigned_rate_limits": { 00:07:10.395 "rw_ios_per_sec": 0, 00:07:10.395 "rw_mbytes_per_sec": 0, 00:07:10.395 "r_mbytes_per_sec": 0, 00:07:10.395 "w_mbytes_per_sec": 0 00:07:10.395 }, 00:07:10.395 "claimed": true, 00:07:10.395 "claim_type": "exclusive_write", 00:07:10.395 "zoned": false, 00:07:10.395 "supported_io_types": { 00:07:10.395 "read": true, 00:07:10.395 "write": true, 00:07:10.395 "unmap": true, 00:07:10.395 "flush": true, 00:07:10.395 "reset": true, 00:07:10.395 "nvme_admin": false, 00:07:10.395 "nvme_io": false, 00:07:10.395 "nvme_io_md": false, 00:07:10.395 "write_zeroes": true, 00:07:10.395 "zcopy": true, 00:07:10.395 "get_zone_info": false, 00:07:10.395 "zone_management": false, 00:07:10.395 "zone_append": false, 00:07:10.395 "compare": false, 00:07:10.395 "compare_and_write": false, 00:07:10.395 "abort": true, 00:07:10.395 "seek_hole": false, 00:07:10.395 "seek_data": false, 00:07:10.395 "copy": true, 00:07:10.395 "nvme_iov_md": false 00:07:10.395 }, 00:07:10.395 "memory_domains": [ 00:07:10.395 { 00:07:10.395 "dma_device_id": "system", 00:07:10.395 "dma_device_type": 1 00:07:10.395 }, 00:07:10.395 { 00:07:10.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.395 "dma_device_type": 2 00:07:10.395 } 00:07:10.395 ], 00:07:10.395 "driver_specific": {} 00:07:10.395 }, 00:07:10.395 { 00:07:10.395 "name": "Passthru0", 00:07:10.395 "aliases": [ 00:07:10.395 "4e2b1021-83fd-55b8-8cd8-d88ac36a7a4d" 00:07:10.395 ], 00:07:10.395 "product_name": "passthru", 00:07:10.395 "block_size": 512, 00:07:10.395 "num_blocks": 16384, 00:07:10.395 "uuid": "4e2b1021-83fd-55b8-8cd8-d88ac36a7a4d", 00:07:10.395 "assigned_rate_limits": { 00:07:10.395 "rw_ios_per_sec": 0, 00:07:10.395 "rw_mbytes_per_sec": 0, 00:07:10.395 "r_mbytes_per_sec": 0, 00:07:10.395 "w_mbytes_per_sec": 0 00:07:10.395 }, 00:07:10.395 "claimed": false, 00:07:10.395 "zoned": false, 00:07:10.395 "supported_io_types": { 00:07:10.395 "read": true, 00:07:10.395 "write": true, 00:07:10.395 "unmap": true, 00:07:10.395 "flush": true, 00:07:10.395 "reset": true, 00:07:10.395 "nvme_admin": false, 00:07:10.395 "nvme_io": false, 00:07:10.395 "nvme_io_md": false, 00:07:10.395 "write_zeroes": true, 00:07:10.395 "zcopy": true, 00:07:10.395 "get_zone_info": false, 00:07:10.395 "zone_management": false, 00:07:10.395 "zone_append": false, 00:07:10.395 "compare": false, 00:07:10.395 "compare_and_write": false, 00:07:10.395 "abort": true, 00:07:10.395 "seek_hole": false, 00:07:10.395 "seek_data": false, 00:07:10.395 "copy": true, 00:07:10.395 
"nvme_iov_md": false 00:07:10.395 }, 00:07:10.395 "memory_domains": [ 00:07:10.395 { 00:07:10.395 "dma_device_id": "system", 00:07:10.395 "dma_device_type": 1 00:07:10.395 }, 00:07:10.395 { 00:07:10.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.395 "dma_device_type": 2 00:07:10.395 } 00:07:10.395 ], 00:07:10.395 "driver_specific": { 00:07:10.395 "passthru": { 00:07:10.395 "name": "Passthru0", 00:07:10.395 "base_bdev_name": "Malloc0" 00:07:10.395 } 00:07:10.395 } 00:07:10.395 } 00:07:10.395 ]' 00:07:10.395 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:10.395 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:10.395 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:10.395 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.395 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.395 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.395 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:10.395 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.395 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.395 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.395 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:10.395 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.395 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.395 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.395 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:10.395 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:10.395 15:55:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:10.395 00:07:10.395 real 0m0.277s 00:07:10.395 user 0m0.178s 00:07:10.395 sys 0m0.034s 00:07:10.395 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.395 15:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.395 ************************************ 00:07:10.395 END TEST rpc_integrity 00:07:10.395 ************************************ 00:07:10.395 15:55:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:10.395 15:55:28 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.395 15:55:28 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.395 15:55:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.654 ************************************ 00:07:10.654 START TEST rpc_plugins 00:07:10.654 ************************************ 00:07:10.654 15:55:28 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:07:10.654 15:55:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:10.654 15:55:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.654 15:55:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:10.654 15:55:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.654 15:55:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:10.654 15:55:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:10.654 15:55:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.654 15:55:28 
rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:10.654 15:55:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.654 15:55:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:10.654 { 00:07:10.654 "name": "Malloc1", 00:07:10.654 "aliases": [ 00:07:10.654 "b30199d4-525c-4756-87af-0d978de7feb4" 00:07:10.654 ], 00:07:10.654 "product_name": "Malloc disk", 00:07:10.654 "block_size": 4096, 00:07:10.654 "num_blocks": 256, 00:07:10.654 "uuid": "b30199d4-525c-4756-87af-0d978de7feb4", 00:07:10.654 "assigned_rate_limits": { 00:07:10.654 "rw_ios_per_sec": 0, 00:07:10.654 "rw_mbytes_per_sec": 0, 00:07:10.654 "r_mbytes_per_sec": 0, 00:07:10.654 "w_mbytes_per_sec": 0 00:07:10.654 }, 00:07:10.654 "claimed": false, 00:07:10.654 "zoned": false, 00:07:10.654 "supported_io_types": { 00:07:10.654 "read": true, 00:07:10.654 "write": true, 00:07:10.654 "unmap": true, 00:07:10.654 "flush": true, 00:07:10.654 "reset": true, 00:07:10.654 "nvme_admin": false, 00:07:10.654 "nvme_io": false, 00:07:10.654 "nvme_io_md": false, 00:07:10.654 "write_zeroes": true, 00:07:10.654 "zcopy": true, 00:07:10.654 "get_zone_info": false, 00:07:10.654 "zone_management": false, 00:07:10.654 "zone_append": false, 00:07:10.654 "compare": false, 00:07:10.654 "compare_and_write": false, 00:07:10.654 "abort": true, 00:07:10.654 "seek_hole": false, 00:07:10.654 "seek_data": false, 00:07:10.654 "copy": true, 00:07:10.654 "nvme_iov_md": false 00:07:10.654 }, 00:07:10.654 "memory_domains": [ 00:07:10.654 { 00:07:10.654 "dma_device_id": "system", 00:07:10.654 "dma_device_type": 1 00:07:10.654 }, 00:07:10.654 { 00:07:10.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.654 "dma_device_type": 2 00:07:10.654 } 00:07:10.654 ], 00:07:10.654 "driver_specific": {} 00:07:10.654 } 00:07:10.654 ]' 00:07:10.654 15:55:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:10.654 15:55:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:10.654 15:55:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:10.654 15:55:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.654 15:55:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:10.654 15:55:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.654 15:55:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:10.654 15:55:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.654 15:55:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:10.654 15:55:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.654 15:55:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:10.654 15:55:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:10.654 15:55:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:10.654 00:07:10.654 real 0m0.138s 00:07:10.654 user 0m0.091s 00:07:10.654 sys 0m0.012s 00:07:10.654 15:55:28 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.654 15:55:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:10.654 ************************************ 00:07:10.654 END TEST rpc_plugins 00:07:10.654 ************************************ 00:07:10.654 15:55:28 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:10.654 15:55:28 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.654 15:55:28 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 
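The rpc_integrity and rpc_plugins runs traced above follow the same lifecycle: count the registered bdevs, create one, re-count, tear it down, and confirm the count returns to zero. A condensed, hedged sketch of that cycle using scripts/rpc.py directly (every RPC named here appears verbatim in the trace; SPDK_DIR is the placeholder from the earlier sketches):

```bash
# Condensed create/verify/clean-up cycle as exercised by rpc_integrity.
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

[ "$($RPC bdev_get_bdevs | jq length)" -eq 0 ]        # clean target to start with
malloc=$($RPC bdev_malloc_create 8 512)               # prints the new bdev name, e.g. Malloc0
$RPC bdev_passthru_create -b "$malloc" -p Passthru0
[ "$($RPC bdev_get_bdevs | jq length)" -eq 2 ]        # malloc bdev + passthru on top
$RPC bdev_passthru_delete Passthru0
$RPC bdev_malloc_delete "$malloc"
[ "$($RPC bdev_get_bdevs | jq length)" -eq 0 ]        # everything torn down again
```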
00:07:10.654 15:55:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.654 ************************************ 00:07:10.654 START TEST rpc_trace_cmd_test 00:07:10.654 ************************************ 00:07:10.654 15:55:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:07:10.654 15:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:10.654 15:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:10.654 15:55:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.654 15:55:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.654 15:55:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.654 15:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:10.654 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid152118", 00:07:10.654 "tpoint_group_mask": "0x8", 00:07:10.654 "iscsi_conn": { 00:07:10.654 "mask": "0x2", 00:07:10.654 "tpoint_mask": "0x0" 00:07:10.654 }, 00:07:10.654 "scsi": { 00:07:10.654 "mask": "0x4", 00:07:10.654 "tpoint_mask": "0x0" 00:07:10.654 }, 00:07:10.654 "bdev": { 00:07:10.654 "mask": "0x8", 00:07:10.654 "tpoint_mask": "0xffffffffffffffff" 00:07:10.654 }, 00:07:10.654 "nvmf_rdma": { 00:07:10.654 "mask": "0x10", 00:07:10.654 "tpoint_mask": "0x0" 00:07:10.654 }, 00:07:10.654 "nvmf_tcp": { 00:07:10.654 "mask": "0x20", 00:07:10.654 "tpoint_mask": "0x0" 00:07:10.654 }, 00:07:10.654 "ftl": { 00:07:10.654 "mask": "0x40", 00:07:10.654 "tpoint_mask": "0x0" 00:07:10.654 }, 00:07:10.654 "blobfs": { 00:07:10.654 "mask": "0x80", 00:07:10.654 "tpoint_mask": "0x0" 00:07:10.654 }, 00:07:10.654 "dsa": { 00:07:10.654 "mask": "0x200", 00:07:10.654 "tpoint_mask": "0x0" 00:07:10.654 }, 00:07:10.654 "thread": { 00:07:10.654 "mask": "0x400", 00:07:10.654 "tpoint_mask": "0x0" 00:07:10.654 }, 00:07:10.654 "nvme_pcie": { 00:07:10.654 "mask": "0x800", 00:07:10.654 "tpoint_mask": "0x0" 00:07:10.654 }, 00:07:10.654 "iaa": { 00:07:10.654 "mask": "0x1000", 00:07:10.654 "tpoint_mask": "0x0" 00:07:10.654 }, 00:07:10.654 "nvme_tcp": { 00:07:10.654 "mask": "0x2000", 00:07:10.654 "tpoint_mask": "0x0" 00:07:10.654 }, 00:07:10.654 "bdev_nvme": { 00:07:10.654 "mask": "0x4000", 00:07:10.654 "tpoint_mask": "0x0" 00:07:10.654 }, 00:07:10.654 "sock": { 00:07:10.654 "mask": "0x8000", 00:07:10.654 "tpoint_mask": "0x0" 00:07:10.654 } 00:07:10.654 }' 00:07:10.654 15:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:10.913 15:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:07:10.913 15:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:10.913 15:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:10.913 15:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:10.913 15:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:10.913 15:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:10.913 15:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:10.913 15:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:10.913 15:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:10.913 00:07:10.913 real 0m0.216s 00:07:10.913 user 0m0.184s 00:07:10.913 sys 0m0.026s 00:07:10.913 15:55:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.913 15:55:28 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.913 ************************************ 00:07:10.913 END TEST rpc_trace_cmd_test 00:07:10.913 ************************************ 00:07:10.913 15:55:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:10.913 15:55:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:10.913 15:55:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:10.913 15:55:28 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.913 15:55:28 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.913 15:55:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.913 ************************************ 00:07:10.913 START TEST rpc_daemon_integrity 00:07:10.913 ************************************ 00:07:10.913 15:55:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:10.913 15:55:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:10.913 15:55:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.913 15:55:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.913 15:55:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.913 15:55:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:10.913 15:55:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:11.173 15:55:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:11.173 15:55:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:11.173 15:55:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.173 15:55:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.173 15:55:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.173 15:55:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:11.173 15:55:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:11.173 15:55:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.173 15:55:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.173 15:55:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.173 15:55:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:11.173 { 00:07:11.173 "name": "Malloc2", 00:07:11.173 "aliases": [ 00:07:11.173 "12e1f6f5-a80f-4b32-a5b7-3c1e249be523" 00:07:11.173 ], 00:07:11.173 "product_name": "Malloc disk", 00:07:11.173 "block_size": 512, 00:07:11.173 "num_blocks": 16384, 00:07:11.173 "uuid": "12e1f6f5-a80f-4b32-a5b7-3c1e249be523", 00:07:11.173 "assigned_rate_limits": { 00:07:11.173 "rw_ios_per_sec": 0, 00:07:11.173 "rw_mbytes_per_sec": 0, 00:07:11.173 "r_mbytes_per_sec": 0, 00:07:11.173 "w_mbytes_per_sec": 0 00:07:11.173 }, 00:07:11.173 "claimed": false, 00:07:11.173 "zoned": false, 00:07:11.173 "supported_io_types": { 00:07:11.173 "read": true, 00:07:11.173 "write": true, 00:07:11.173 "unmap": true, 00:07:11.173 "flush": true, 00:07:11.173 "reset": true, 00:07:11.173 "nvme_admin": false, 00:07:11.173 "nvme_io": false, 00:07:11.173 "nvme_io_md": false, 00:07:11.173 "write_zeroes": true, 00:07:11.173 "zcopy": true, 00:07:11.173 "get_zone_info": false, 00:07:11.173 "zone_management": false, 00:07:11.173 "zone_append": false, 00:07:11.173 "compare": false, 00:07:11.173 "compare_and_write": false, 
00:07:11.173 "abort": true, 00:07:11.173 "seek_hole": false, 00:07:11.173 "seek_data": false, 00:07:11.173 "copy": true, 00:07:11.173 "nvme_iov_md": false 00:07:11.173 }, 00:07:11.173 "memory_domains": [ 00:07:11.173 { 00:07:11.173 "dma_device_id": "system", 00:07:11.173 "dma_device_type": 1 00:07:11.173 }, 00:07:11.173 { 00:07:11.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.173 "dma_device_type": 2 00:07:11.173 } 00:07:11.173 ], 00:07:11.173 "driver_specific": {} 00:07:11.173 } 00:07:11.173 ]' 00:07:11.173 15:55:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.173 [2024-07-25 15:55:29.010272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:11.173 [2024-07-25 15:55:29.010306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.173 [2024-07-25 15:55:29.010320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x602aa10 00:07:11.173 [2024-07-25 15:55:29.010326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.173 [2024-07-25 15:55:29.011058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.173 [2024-07-25 15:55:29.011078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:11.173 Passthru0 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:11.173 { 00:07:11.173 "name": "Malloc2", 00:07:11.173 "aliases": [ 00:07:11.173 "12e1f6f5-a80f-4b32-a5b7-3c1e249be523" 00:07:11.173 ], 00:07:11.173 "product_name": "Malloc disk", 00:07:11.173 "block_size": 512, 00:07:11.173 "num_blocks": 16384, 00:07:11.173 "uuid": "12e1f6f5-a80f-4b32-a5b7-3c1e249be523", 00:07:11.173 "assigned_rate_limits": { 00:07:11.173 "rw_ios_per_sec": 0, 00:07:11.173 "rw_mbytes_per_sec": 0, 00:07:11.173 "r_mbytes_per_sec": 0, 00:07:11.173 "w_mbytes_per_sec": 0 00:07:11.173 }, 00:07:11.173 "claimed": true, 00:07:11.173 "claim_type": "exclusive_write", 00:07:11.173 "zoned": false, 00:07:11.173 "supported_io_types": { 00:07:11.173 "read": true, 00:07:11.173 "write": true, 00:07:11.173 "unmap": true, 00:07:11.173 "flush": true, 00:07:11.173 "reset": true, 00:07:11.173 "nvme_admin": false, 00:07:11.173 "nvme_io": false, 00:07:11.173 "nvme_io_md": false, 00:07:11.173 "write_zeroes": true, 00:07:11.173 "zcopy": true, 00:07:11.173 "get_zone_info": false, 00:07:11.173 "zone_management": false, 00:07:11.173 "zone_append": false, 00:07:11.173 "compare": false, 00:07:11.173 "compare_and_write": false, 00:07:11.173 "abort": true, 00:07:11.173 "seek_hole": false, 00:07:11.173 "seek_data": false, 00:07:11.173 "copy": true, 
00:07:11.173 "nvme_iov_md": false 00:07:11.173 }, 00:07:11.173 "memory_domains": [ 00:07:11.173 { 00:07:11.173 "dma_device_id": "system", 00:07:11.173 "dma_device_type": 1 00:07:11.173 }, 00:07:11.173 { 00:07:11.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.173 "dma_device_type": 2 00:07:11.173 } 00:07:11.173 ], 00:07:11.173 "driver_specific": {} 00:07:11.173 }, 00:07:11.173 { 00:07:11.173 "name": "Passthru0", 00:07:11.173 "aliases": [ 00:07:11.173 "44a60882-f385-5f8f-89d9-64c0a609e568" 00:07:11.173 ], 00:07:11.173 "product_name": "passthru", 00:07:11.173 "block_size": 512, 00:07:11.173 "num_blocks": 16384, 00:07:11.173 "uuid": "44a60882-f385-5f8f-89d9-64c0a609e568", 00:07:11.173 "assigned_rate_limits": { 00:07:11.173 "rw_ios_per_sec": 0, 00:07:11.173 "rw_mbytes_per_sec": 0, 00:07:11.173 "r_mbytes_per_sec": 0, 00:07:11.173 "w_mbytes_per_sec": 0 00:07:11.173 }, 00:07:11.173 "claimed": false, 00:07:11.173 "zoned": false, 00:07:11.173 "supported_io_types": { 00:07:11.173 "read": true, 00:07:11.173 "write": true, 00:07:11.173 "unmap": true, 00:07:11.173 "flush": true, 00:07:11.173 "reset": true, 00:07:11.173 "nvme_admin": false, 00:07:11.173 "nvme_io": false, 00:07:11.173 "nvme_io_md": false, 00:07:11.173 "write_zeroes": true, 00:07:11.173 "zcopy": true, 00:07:11.173 "get_zone_info": false, 00:07:11.173 "zone_management": false, 00:07:11.173 "zone_append": false, 00:07:11.173 "compare": false, 00:07:11.173 "compare_and_write": false, 00:07:11.173 "abort": true, 00:07:11.173 "seek_hole": false, 00:07:11.173 "seek_data": false, 00:07:11.173 "copy": true, 00:07:11.173 "nvme_iov_md": false 00:07:11.173 }, 00:07:11.173 "memory_domains": [ 00:07:11.173 { 00:07:11.173 "dma_device_id": "system", 00:07:11.173 "dma_device_type": 1 00:07:11.173 }, 00:07:11.173 { 00:07:11.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.173 "dma_device_type": 2 00:07:11.173 } 00:07:11.173 ], 00:07:11.173 "driver_specific": { 00:07:11.173 "passthru": { 00:07:11.173 "name": "Passthru0", 00:07:11.173 "base_bdev_name": "Malloc2" 00:07:11.173 } 00:07:11.173 } 00:07:11.173 } 00:07:11.173 ]' 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:11.173 15:55:29 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:11.173 15:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:11.173 00:07:11.173 real 0m0.281s 00:07:11.173 user 0m0.182s 00:07:11.173 sys 0m0.032s 00:07:11.174 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.174 15:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:11.174 ************************************ 00:07:11.174 END TEST rpc_daemon_integrity 00:07:11.174 ************************************ 00:07:11.433 15:55:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:11.433 15:55:29 rpc -- rpc/rpc.sh@84 -- # killprocess 152118 00:07:11.433 15:55:29 rpc -- common/autotest_common.sh@950 -- # '[' -z 152118 ']' 00:07:11.433 15:55:29 rpc -- common/autotest_common.sh@954 -- # kill -0 152118 00:07:11.433 15:55:29 rpc -- common/autotest_common.sh@955 -- # uname 00:07:11.433 15:55:29 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.433 15:55:29 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 152118 00:07:11.433 15:55:29 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.433 15:55:29 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.433 15:55:29 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 152118' 00:07:11.433 killing process with pid 152118 00:07:11.433 15:55:29 rpc -- common/autotest_common.sh@969 -- # kill 152118 00:07:11.433 15:55:29 rpc -- common/autotest_common.sh@974 -- # wait 152118 00:07:11.693 00:07:11.693 real 0m2.445s 00:07:11.693 user 0m3.165s 00:07:11.693 sys 0m0.659s 00:07:11.693 15:55:29 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.693 15:55:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.693 ************************************ 00:07:11.693 END TEST rpc 00:07:11.693 ************************************ 00:07:11.693 15:55:29 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:11.693 15:55:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.693 15:55:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.693 15:55:29 -- common/autotest_common.sh@10 -- # set +x 00:07:11.693 ************************************ 00:07:11.693 START TEST skip_rpc 00:07:11.693 ************************************ 00:07:11.693 15:55:29 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:11.693 * Looking for test storage... 
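The rpc suite ends a few lines above by tearing down the target through the killprocess helper (kill -0, a ps comm check, kill, wait). A simplified sketch of that shutdown pattern; the real helper additionally special-cases targets launched under sudo:

```bash
# Simplified shutdown sketch: confirm the pid is still alive, kill it, and
# reap it with wait so the caller sees a clean exit status.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0      # already gone, nothing to do
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
}

killprocess_sketch "$spdk_pid"
```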
00:07:11.693 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:07:11.953 15:55:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:11.953 15:55:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:07:11.953 15:55:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:11.953 15:55:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.953 15:55:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.953 15:55:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.953 ************************************ 00:07:11.953 START TEST skip_rpc 00:07:11.953 ************************************ 00:07:11.953 15:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:07:11.953 15:55:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=152718 00:07:11.953 15:55:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.953 15:55:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:11.953 15:55:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:11.953 [2024-07-25 15:55:29.739558] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:07:11.953 [2024-07-25 15:55:29.739613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152718 ] 00:07:11.953 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.953 [2024-07-25 15:55:29.810857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.953 [2024-07-25 15:55:29.883409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.225 15:55:34 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 152718 00:07:17.225 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 152718 ']' 00:07:17.226 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 152718 00:07:17.226 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:07:17.226 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.226 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 152718 00:07:17.226 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.226 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.226 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 152718' 00:07:17.226 killing process with pid 152718 00:07:17.226 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 152718 00:07:17.226 15:55:34 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 152718 00:07:17.226 00:07:17.226 real 0m5.348s 00:07:17.226 user 0m5.117s 00:07:17.226 sys 0m0.258s 00:07:17.226 15:55:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.226 15:55:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.226 ************************************ 00:07:17.226 END TEST skip_rpc 00:07:17.226 ************************************ 00:07:17.226 15:55:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:17.226 15:55:35 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.226 15:55:35 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.226 15:55:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.226 ************************************ 00:07:17.226 START TEST skip_rpc_with_json 00:07:17.226 ************************************ 00:07:17.226 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:07:17.226 15:55:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:17.226 15:55:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=153603 00:07:17.226 15:55:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:17.226 15:55:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:17.226 15:55:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 153603 00:07:17.226 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 153603 ']' 00:07:17.226 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.226 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.226 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
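The skip_rpc case that finishes above is a negative test: with --no-rpc-server the target never opens /var/tmp/spdk.sock, so spdk_get_version must fail and the NOT helper converts that failure into es=1. A hedged sketch of the same check (flags, the sleep, and the RPC name come from the trace; the PASS/FAIL echoes are illustrative):

```bash
# Negative check: an RPC against a target started with --no-rpc-server must fail.
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5                                          # the test simply sleeps before probing

if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
    echo "FAIL: RPC unexpectedly succeeded" >&2
else
    echo "PASS: spdk_get_version was rejected, as the test expects"
fi
kill "$spdk_pid"; wait "$spdk_pid" 2>/dev/null
```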
00:07:17.226 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.226 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:17.226 [2024-07-25 15:55:35.152650] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:07:17.226 [2024-07-25 15:55:35.152704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153603 ] 00:07:17.226 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.486 [2024-07-25 15:55:35.220481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.486 [2024-07-25 15:55:35.299823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.054 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.054 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:07:18.054 15:55:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:18.054 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.054 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:18.054 [2024-07-25 15:55:35.978919] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:18.054 request: 00:07:18.054 { 00:07:18.054 "trtype": "tcp", 00:07:18.054 "method": "nvmf_get_transports", 00:07:18.054 "req_id": 1 00:07:18.054 } 00:07:18.054 Got JSON-RPC error response 00:07:18.054 response: 00:07:18.054 { 00:07:18.054 "code": -19, 00:07:18.054 "message": "No such device" 00:07:18.054 } 00:07:18.054 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:18.054 15:55:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:18.054 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.054 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:18.054 [2024-07-25 15:55:35.991011] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.054 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.054 15:55:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:18.054 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.054 15:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:18.314 15:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.314 15:55:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:18.314 { 00:07:18.314 "subsystems": [ 00:07:18.314 { 00:07:18.314 "subsystem": "scheduler", 00:07:18.314 "config": [ 00:07:18.314 { 00:07:18.314 "method": "framework_set_scheduler", 00:07:18.314 "params": { 00:07:18.314 "name": "static" 00:07:18.314 } 00:07:18.314 } 00:07:18.314 ] 00:07:18.314 }, 00:07:18.314 { 00:07:18.314 "subsystem": "vmd", 00:07:18.314 "config": [] 00:07:18.314 }, 00:07:18.314 { 00:07:18.314 "subsystem": "sock", 00:07:18.314 "config": [ 00:07:18.314 { 00:07:18.314 "method": "sock_set_default_impl", 00:07:18.314 
"params": { 00:07:18.314 "impl_name": "posix" 00:07:18.314 } 00:07:18.314 }, 00:07:18.314 { 00:07:18.314 "method": "sock_impl_set_options", 00:07:18.314 "params": { 00:07:18.314 "impl_name": "ssl", 00:07:18.314 "recv_buf_size": 4096, 00:07:18.314 "send_buf_size": 4096, 00:07:18.314 "enable_recv_pipe": true, 00:07:18.314 "enable_quickack": false, 00:07:18.314 "enable_placement_id": 0, 00:07:18.314 "enable_zerocopy_send_server": true, 00:07:18.314 "enable_zerocopy_send_client": false, 00:07:18.314 "zerocopy_threshold": 0, 00:07:18.314 "tls_version": 0, 00:07:18.314 "enable_ktls": false 00:07:18.314 } 00:07:18.314 }, 00:07:18.314 { 00:07:18.314 "method": "sock_impl_set_options", 00:07:18.314 "params": { 00:07:18.314 "impl_name": "posix", 00:07:18.314 "recv_buf_size": 2097152, 00:07:18.314 "send_buf_size": 2097152, 00:07:18.314 "enable_recv_pipe": true, 00:07:18.314 "enable_quickack": false, 00:07:18.314 "enable_placement_id": 0, 00:07:18.314 "enable_zerocopy_send_server": true, 00:07:18.314 "enable_zerocopy_send_client": false, 00:07:18.314 "zerocopy_threshold": 0, 00:07:18.314 "tls_version": 0, 00:07:18.314 "enable_ktls": false 00:07:18.314 } 00:07:18.314 } 00:07:18.314 ] 00:07:18.314 }, 00:07:18.314 { 00:07:18.314 "subsystem": "iobuf", 00:07:18.314 "config": [ 00:07:18.314 { 00:07:18.314 "method": "iobuf_set_options", 00:07:18.314 "params": { 00:07:18.314 "small_pool_count": 8192, 00:07:18.314 "large_pool_count": 1024, 00:07:18.314 "small_bufsize": 8192, 00:07:18.314 "large_bufsize": 135168 00:07:18.314 } 00:07:18.314 } 00:07:18.314 ] 00:07:18.314 }, 00:07:18.314 { 00:07:18.314 "subsystem": "keyring", 00:07:18.314 "config": [] 00:07:18.314 }, 00:07:18.314 { 00:07:18.314 "subsystem": "vfio_user_target", 00:07:18.314 "config": null 00:07:18.314 }, 00:07:18.314 { 00:07:18.314 "subsystem": "accel", 00:07:18.314 "config": [ 00:07:18.314 { 00:07:18.314 "method": "accel_set_options", 00:07:18.314 "params": { 00:07:18.314 "small_cache_size": 128, 00:07:18.314 "large_cache_size": 16, 00:07:18.314 "task_count": 2048, 00:07:18.314 "sequence_count": 2048, 00:07:18.314 "buf_count": 2048 00:07:18.314 } 00:07:18.314 } 00:07:18.314 ] 00:07:18.314 }, 00:07:18.314 { 00:07:18.314 "subsystem": "bdev", 00:07:18.314 "config": [ 00:07:18.314 { 00:07:18.314 "method": "bdev_set_options", 00:07:18.314 "params": { 00:07:18.314 "bdev_io_pool_size": 65535, 00:07:18.314 "bdev_io_cache_size": 256, 00:07:18.314 "bdev_auto_examine": true, 00:07:18.314 "iobuf_small_cache_size": 128, 00:07:18.314 "iobuf_large_cache_size": 16 00:07:18.314 } 00:07:18.314 }, 00:07:18.314 { 00:07:18.314 "method": "bdev_raid_set_options", 00:07:18.314 "params": { 00:07:18.314 "process_window_size_kb": 1024, 00:07:18.314 "process_max_bandwidth_mb_sec": 0 00:07:18.314 } 00:07:18.314 }, 00:07:18.314 { 00:07:18.314 "method": "bdev_nvme_set_options", 00:07:18.314 "params": { 00:07:18.314 "action_on_timeout": "none", 00:07:18.314 "timeout_us": 0, 00:07:18.314 "timeout_admin_us": 0, 00:07:18.314 "keep_alive_timeout_ms": 10000, 00:07:18.314 "arbitration_burst": 0, 00:07:18.314 "low_priority_weight": 0, 00:07:18.314 "medium_priority_weight": 0, 00:07:18.314 "high_priority_weight": 0, 00:07:18.314 "nvme_adminq_poll_period_us": 10000, 00:07:18.314 "nvme_ioq_poll_period_us": 0, 00:07:18.314 "io_queue_requests": 0, 00:07:18.314 "delay_cmd_submit": true, 00:07:18.314 "transport_retry_count": 4, 00:07:18.314 "bdev_retry_count": 3, 00:07:18.314 "transport_ack_timeout": 0, 00:07:18.314 "ctrlr_loss_timeout_sec": 0, 00:07:18.314 "reconnect_delay_sec": 0, 
00:07:18.314 "fast_io_fail_timeout_sec": 0, 00:07:18.314 "disable_auto_failback": false, 00:07:18.314 "generate_uuids": false, 00:07:18.314 "transport_tos": 0, 00:07:18.315 "nvme_error_stat": false, 00:07:18.315 "rdma_srq_size": 0, 00:07:18.315 "io_path_stat": false, 00:07:18.315 "allow_accel_sequence": false, 00:07:18.315 "rdma_max_cq_size": 0, 00:07:18.315 "rdma_cm_event_timeout_ms": 0, 00:07:18.315 "dhchap_digests": [ 00:07:18.315 "sha256", 00:07:18.315 "sha384", 00:07:18.315 "sha512" 00:07:18.315 ], 00:07:18.315 "dhchap_dhgroups": [ 00:07:18.315 "null", 00:07:18.315 "ffdhe2048", 00:07:18.315 "ffdhe3072", 00:07:18.315 "ffdhe4096", 00:07:18.315 "ffdhe6144", 00:07:18.315 "ffdhe8192" 00:07:18.315 ] 00:07:18.315 } 00:07:18.315 }, 00:07:18.315 { 00:07:18.315 "method": "bdev_nvme_set_hotplug", 00:07:18.315 "params": { 00:07:18.315 "period_us": 100000, 00:07:18.315 "enable": false 00:07:18.315 } 00:07:18.315 }, 00:07:18.315 { 00:07:18.315 "method": "bdev_iscsi_set_options", 00:07:18.315 "params": { 00:07:18.315 "timeout_sec": 30 00:07:18.315 } 00:07:18.315 }, 00:07:18.315 { 00:07:18.315 "method": "bdev_wait_for_examine" 00:07:18.315 } 00:07:18.315 ] 00:07:18.315 }, 00:07:18.315 { 00:07:18.315 "subsystem": "nvmf", 00:07:18.315 "config": [ 00:07:18.315 { 00:07:18.315 "method": "nvmf_set_config", 00:07:18.315 "params": { 00:07:18.315 "discovery_filter": "match_any", 00:07:18.315 "admin_cmd_passthru": { 00:07:18.315 "identify_ctrlr": false 00:07:18.315 } 00:07:18.315 } 00:07:18.315 }, 00:07:18.315 { 00:07:18.315 "method": "nvmf_set_max_subsystems", 00:07:18.315 "params": { 00:07:18.315 "max_subsystems": 1024 00:07:18.315 } 00:07:18.315 }, 00:07:18.315 { 00:07:18.315 "method": "nvmf_set_crdt", 00:07:18.315 "params": { 00:07:18.315 "crdt1": 0, 00:07:18.315 "crdt2": 0, 00:07:18.315 "crdt3": 0 00:07:18.315 } 00:07:18.315 }, 00:07:18.315 { 00:07:18.315 "method": "nvmf_create_transport", 00:07:18.315 "params": { 00:07:18.315 "trtype": "TCP", 00:07:18.315 "max_queue_depth": 128, 00:07:18.315 "max_io_qpairs_per_ctrlr": 127, 00:07:18.315 "in_capsule_data_size": 4096, 00:07:18.315 "max_io_size": 131072, 00:07:18.315 "io_unit_size": 131072, 00:07:18.315 "max_aq_depth": 128, 00:07:18.315 "num_shared_buffers": 511, 00:07:18.315 "buf_cache_size": 4294967295, 00:07:18.315 "dif_insert_or_strip": false, 00:07:18.315 "zcopy": false, 00:07:18.315 "c2h_success": true, 00:07:18.315 "sock_priority": 0, 00:07:18.315 "abort_timeout_sec": 1, 00:07:18.315 "ack_timeout": 0, 00:07:18.315 "data_wr_pool_size": 0 00:07:18.315 } 00:07:18.315 } 00:07:18.315 ] 00:07:18.315 }, 00:07:18.315 { 00:07:18.315 "subsystem": "nbd", 00:07:18.315 "config": [] 00:07:18.315 }, 00:07:18.315 { 00:07:18.315 "subsystem": "ublk", 00:07:18.315 "config": [] 00:07:18.315 }, 00:07:18.315 { 00:07:18.315 "subsystem": "vhost_blk", 00:07:18.315 "config": [] 00:07:18.315 }, 00:07:18.315 { 00:07:18.315 "subsystem": "scsi", 00:07:18.315 "config": null 00:07:18.315 }, 00:07:18.315 { 00:07:18.315 "subsystem": "iscsi", 00:07:18.315 "config": [ 00:07:18.315 { 00:07:18.315 "method": "iscsi_set_options", 00:07:18.315 "params": { 00:07:18.315 "node_base": "iqn.2016-06.io.spdk", 00:07:18.315 "max_sessions": 128, 00:07:18.315 "max_connections_per_session": 2, 00:07:18.315 "max_queue_depth": 64, 00:07:18.315 "default_time2wait": 2, 00:07:18.315 "default_time2retain": 20, 00:07:18.315 "first_burst_length": 8192, 00:07:18.315 "immediate_data": true, 00:07:18.315 "allow_duplicated_isid": false, 00:07:18.315 "error_recovery_level": 0, 00:07:18.315 "nop_timeout": 60, 
00:07:18.315 "nop_in_interval": 30, 00:07:18.315 "disable_chap": false, 00:07:18.315 "require_chap": false, 00:07:18.315 "mutual_chap": false, 00:07:18.315 "chap_group": 0, 00:07:18.315 "max_large_datain_per_connection": 64, 00:07:18.315 "max_r2t_per_connection": 4, 00:07:18.315 "pdu_pool_size": 36864, 00:07:18.315 "immediate_data_pool_size": 16384, 00:07:18.315 "data_out_pool_size": 2048 00:07:18.315 } 00:07:18.315 } 00:07:18.315 ] 00:07:18.315 }, 00:07:18.315 { 00:07:18.315 "subsystem": "vhost_scsi", 00:07:18.315 "config": [] 00:07:18.315 } 00:07:18.315 ] 00:07:18.315 } 00:07:18.315 15:55:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:18.315 15:55:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 153603 00:07:18.315 15:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 153603 ']' 00:07:18.315 15:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 153603 00:07:18.315 15:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:18.315 15:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.315 15:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 153603 00:07:18.315 15:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.315 15:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.315 15:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 153603' 00:07:18.315 killing process with pid 153603 00:07:18.315 15:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 153603 00:07:18.315 15:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 153603 00:07:18.574 15:55:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=153825 00:07:18.574 15:55:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:18.574 15:55:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:23.847 15:55:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 153825 00:07:23.847 15:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 153825 ']' 00:07:23.847 15:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 153825 00:07:23.847 15:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:23.847 15:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.847 15:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 153825 00:07:23.847 15:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.847 15:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.847 15:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 153825' 00:07:23.847 killing process with pid 153825 00:07:23.847 15:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 153825 00:07:23.847 15:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- 
# wait 153825 00:07:23.847 15:55:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:07:23.847 15:55:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:07:23.847 00:07:23.847 real 0m6.706s 00:07:23.847 user 0m6.522s 00:07:23.847 sys 0m0.588s 00:07:23.847 15:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.847 15:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:23.847 ************************************ 00:07:23.847 END TEST skip_rpc_with_json 00:07:23.847 ************************************ 00:07:24.106 15:55:41 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:24.106 15:55:41 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.106 15:55:41 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.106 15:55:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.106 ************************************ 00:07:24.106 START TEST skip_rpc_with_delay 00:07:24.106 ************************************ 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:24.106 [2024-07-25 15:55:41.926703] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
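The error above is the intended result of the skip_rpc_with_delay case: it deliberately combines --no-rpc-server with --wait-for-rpc, which spdk_tgt rejects, and the NOT wrapper only asserts that the command exits non-zero. A minimal sketch of the invocation being exercised (relative path shown here for brevity; the run above uses the full workspace path):

  # expected to fail: no RPC server will be started, so there is nothing to wait for
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo $?   # a non-zero exit status is all the test checks for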
00:07:24.106 [2024-07-25 15:55:41.926819] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.106 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.106 00:07:24.106 real 0m0.037s 00:07:24.106 user 0m0.021s 00:07:24.106 sys 0m0.016s 00:07:24.107 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.107 15:55:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:24.107 ************************************ 00:07:24.107 END TEST skip_rpc_with_delay 00:07:24.107 ************************************ 00:07:24.107 15:55:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:24.107 15:55:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:24.107 15:55:41 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:24.107 15:55:41 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.107 15:55:41 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.107 15:55:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.107 ************************************ 00:07:24.107 START TEST exit_on_failed_rpc_init 00:07:24.107 ************************************ 00:07:24.107 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:24.107 15:55:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=154734 00:07:24.107 15:55:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 154734 00:07:24.107 15:55:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:24.107 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 154734 ']' 00:07:24.107 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.107 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.107 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.107 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.107 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:24.107 [2024-07-25 15:55:42.037124] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
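The waitforlisten step above blocks until the freshly started target answers on its RPC socket. The helper's exact implementation is not shown in this log; conceptually it amounts to a polling loop along these lines (illustrative only, using the rpc.py client and the default socket path from this run):

  # poll until the target responds on /var/tmp/spdk.sock, then continue with the test
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.1
  done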
00:07:24.107 [2024-07-25 15:55:42.037200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154734 ] 00:07:24.107 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.366 [2024-07-25 15:55:42.111528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.366 [2024-07-25 15:55:42.187752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.934 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.934 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:24.934 15:55:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:24.934 15:55:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:24.934 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:24.934 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:24.934 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.934 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.934 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.935 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.935 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.935 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.935 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.935 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:24.935 15:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:24.935 [2024-07-25 15:55:42.898929] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:07:24.935 [2024-07-25 15:55:42.899001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154945 ] 00:07:25.194 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.194 [2024-07-25 15:55:42.967228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.194 [2024-07-25 15:55:43.041521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.194 [2024-07-25 15:55:43.041596] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:25.194 [2024-07-25 15:55:43.041607] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:25.194 [2024-07-25 15:55:43.041612] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 154734 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 154734 ']' 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 154734 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 154734 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.194 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 154734' 00:07:25.194 killing process with pid 154734 00:07:25.195 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 154734 00:07:25.195 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 154734 00:07:25.764 00:07:25.764 real 0m1.438s 00:07:25.764 user 0m1.628s 00:07:25.764 sys 0m0.416s 00:07:25.764 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.764 15:55:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:25.764 ************************************ 00:07:25.764 END TEST exit_on_failed_rpc_init 00:07:25.764 ************************************ 00:07:25.764 15:55:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 
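The exit_on_failed_rpc_init sequence that just finished boils down to starting two targets against the same default RPC socket and checking that the second one fails cleanly. Roughly, under the paths used in this run:

  ./build/bin/spdk_tgt -m 0x1 &     # first instance claims /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x2       # second instance fails: RPC Unix domain socket path /var/tmp/spdk.sock in use
  # the wrapper treats that non-zero exit as the expected outcome, then kills and waits for the surviving first instance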
00:07:25.764 00:07:25.764 real 0m13.891s 00:07:25.764 user 0m13.430s 00:07:25.764 sys 0m1.525s 00:07:25.764 15:55:43 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.764 15:55:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.764 ************************************ 00:07:25.764 END TEST skip_rpc 00:07:25.764 ************************************ 00:07:25.764 15:55:43 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:25.764 15:55:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.764 15:55:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.764 15:55:43 -- common/autotest_common.sh@10 -- # set +x 00:07:25.764 ************************************ 00:07:25.764 START TEST rpc_client 00:07:25.764 ************************************ 00:07:25.764 15:55:43 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:25.764 * Looking for test storage... 00:07:25.764 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:07:25.764 15:55:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:25.764 OK 00:07:25.764 15:55:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:25.764 00:07:25.764 real 0m0.110s 00:07:25.764 user 0m0.043s 00:07:25.764 sys 0m0.075s 00:07:25.764 15:55:43 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.764 15:55:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:25.764 ************************************ 00:07:25.764 END TEST rpc_client 00:07:25.764 ************************************ 00:07:25.764 15:55:43 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:07:25.764 15:55:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.764 15:55:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.764 15:55:43 -- common/autotest_common.sh@10 -- # set +x 00:07:25.764 ************************************ 00:07:25.764 START TEST json_config 00:07:25.764 ************************************ 00:07:25.764 15:55:43 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:07:26.025 15:55:43 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0055cbfc-b15d-ea11-906e-0017a4403562 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0055cbfc-b15d-ea11-906e-0017a4403562 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:26.025 15:55:43 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.025 15:55:43 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.025 15:55:43 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.025 15:55:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.025 15:55:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.025 15:55:43 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.025 15:55:43 json_config -- paths/export.sh@5 -- # export PATH 00:07:26.025 15:55:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@47 -- # : 0 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.025 15:55:43 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.025 15:55:43 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:07:26.025 15:55:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:26.025 15:55:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:26.025 15:55:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:26.025 15:55:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:26.025 15:55:43 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:26.025 WARNING: No tests are enabled so not running JSON configuration tests 00:07:26.025 15:55:43 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:26.025 00:07:26.025 real 0m0.091s 00:07:26.025 user 0m0.050s 00:07:26.025 sys 0m0.042s 00:07:26.025 15:55:43 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.025 15:55:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.025 ************************************ 00:07:26.025 END TEST json_config 00:07:26.025 ************************************ 00:07:26.025 15:55:43 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:26.025 15:55:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.025 15:55:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.025 15:55:43 -- common/autotest_common.sh@10 -- # set +x 00:07:26.025 ************************************ 00:07:26.025 START TEST json_config_extra_key 00:07:26.025 ************************************ 00:07:26.025 15:55:43 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:26.025 15:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0055cbfc-b15d-ea11-906e-0017a4403562 00:07:26.025 15:55:43 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0055cbfc-b15d-ea11-906e-0017a4403562 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.025 15:55:43 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:26.025 15:55:43 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.025 15:55:43 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.025 15:55:43 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.025 15:55:43 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.026 15:55:43 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.026 15:55:43 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.026 15:55:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:26.026 15:55:43 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.026 15:55:43 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:26.026 15:55:43 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.026 15:55:43 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.026 15:55:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.026 15:55:43 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.026 15:55:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.026 15:55:43 
json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.026 15:55:43 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.026 15:55:43 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.026 15:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:07:26.026 15:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:26.026 15:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:26.026 15:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:26.026 15:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:26.026 15:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:26.026 15:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:26.026 15:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:26.026 15:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:26.026 15:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:26.026 15:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:26.026 INFO: launching applications... 00:07:26.026 15:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:07:26.026 15:55:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:26.026 15:55:43 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:26.026 15:55:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:26.026 15:55:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:26.026 15:55:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:26.026 15:55:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:26.026 15:55:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:26.026 15:55:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=155299 00:07:26.026 15:55:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:26.026 Waiting for target to run... 
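The target being waited for here was launched directly from a JSON configuration on a dedicated RPC socket, which is the point of json_config_extra_key: the configuration is applied at startup without any RPC calls. The equivalent standalone launch, with paths as logged (the rpc.py probe is only an illustrative way to confirm the listener, not part of the test):

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json test/json_config/extra_key.json &
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version   # responds once the target is up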
00:07:26.026 15:55:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 155299 /var/tmp/spdk_tgt.sock 00:07:26.026 15:55:43 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 155299 ']' 00:07:26.026 15:55:43 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:07:26.026 15:55:43 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:26.026 15:55:43 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.026 15:55:43 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:26.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:26.026 15:55:43 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.026 15:55:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:26.026 [2024-07-25 15:55:44.000055] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:07:26.026 [2024-07-25 15:55:44.000133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155299 ] 00:07:26.285 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.285 [2024-07-25 15:55:44.272397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.545 [2024-07-25 15:55:44.337223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.115 15:55:44 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.115 15:55:44 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:27.115 15:55:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:27.115 00:07:27.115 15:55:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:27.115 INFO: shutting down applications... 
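The shutdown that follows ("shutting down applications...") uses the pattern from json_config/common.sh visible below: send SIGINT to the target, then poll the pid for up to 30 half-second intervals before reporting "SPDK target shutdown done". An illustrative equivalent of that loop:

  kill -SIGINT "$pid"
  for i in $(seq 1 30); do
          kill -0 "$pid" 2>/dev/null || break   # stop polling once the target has exited
          sleep 0.5
  done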
00:07:27.115 15:55:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:27.115 15:55:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:27.115 15:55:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:27.115 15:55:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 155299 ]] 00:07:27.115 15:55:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 155299 00:07:27.115 15:55:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:27.115 15:55:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:27.115 15:55:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 155299 00:07:27.115 15:55:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:27.374 15:55:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:27.374 15:55:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:27.374 15:55:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 155299 00:07:27.374 15:55:45 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:27.374 15:55:45 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:27.374 15:55:45 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:27.374 15:55:45 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:27.374 SPDK target shutdown done 00:07:27.374 15:55:45 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:27.374 Success 00:07:27.374 00:07:27.374 real 0m1.440s 00:07:27.374 user 0m1.231s 00:07:27.374 sys 0m0.348s 00:07:27.374 15:55:45 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.374 15:55:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:27.374 ************************************ 00:07:27.374 END TEST json_config_extra_key 00:07:27.374 ************************************ 00:07:27.374 15:55:45 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:27.374 15:55:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.374 15:55:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.374 15:55:45 -- common/autotest_common.sh@10 -- # set +x 00:07:27.634 ************************************ 00:07:27.634 START TEST alias_rpc 00:07:27.634 ************************************ 00:07:27.634 15:55:45 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:27.634 * Looking for test storage... 
00:07:27.634 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:07:27.634 15:55:45 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:27.634 15:55:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=155566 00:07:27.634 15:55:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 155566 00:07:27.634 15:55:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:27.634 15:55:45 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 155566 ']' 00:07:27.634 15:55:45 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.634 15:55:45 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.634 15:55:45 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.634 15:55:45 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.634 15:55:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.634 [2024-07-25 15:55:45.504477] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:07:27.634 [2024-07-25 15:55:45.504560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155566 ] 00:07:27.634 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.634 [2024-07-25 15:55:45.577419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.893 [2024-07-25 15:55:45.654634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.462 15:55:46 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.462 15:55:46 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:28.462 15:55:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:28.721 15:55:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 155566 00:07:28.721 15:55:46 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 155566 ']' 00:07:28.721 15:55:46 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 155566 00:07:28.721 15:55:46 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:28.721 15:55:46 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.721 15:55:46 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 155566 00:07:28.721 15:55:46 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.721 15:55:46 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.721 15:55:46 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 155566' 00:07:28.721 killing process with pid 155566 00:07:28.721 15:55:46 alias_rpc -- common/autotest_common.sh@969 -- # kill 155566 00:07:28.721 15:55:46 alias_rpc -- common/autotest_common.sh@974 -- # wait 155566 00:07:28.981 00:07:28.981 real 0m1.460s 00:07:28.981 user 0m1.584s 00:07:28.981 sys 0m0.406s 00:07:28.981 15:55:46 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.981 15:55:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.981 
************************************ 00:07:28.981 END TEST alias_rpc 00:07:28.981 ************************************ 00:07:28.981 15:55:46 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:28.981 15:55:46 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:28.981 15:55:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.981 15:55:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.981 15:55:46 -- common/autotest_common.sh@10 -- # set +x 00:07:28.981 ************************************ 00:07:28.981 START TEST spdkcli_tcp 00:07:28.981 ************************************ 00:07:28.981 15:55:46 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:29.240 * Looking for test storage... 00:07:29.240 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:07:29.240 15:55:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:07:29.240 15:55:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:29.240 15:55:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:07:29.240 15:55:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:29.240 15:55:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:29.240 15:55:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:29.240 15:55:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:29.240 15:55:47 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:29.240 15:55:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.240 15:55:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=155834 00:07:29.240 15:55:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 155834 00:07:29.240 15:55:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:29.240 15:55:47 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 155834 ']' 00:07:29.240 15:55:47 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.240 15:55:47 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.240 15:55:47 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.240 15:55:47 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.240 15:55:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.240 [2024-07-25 15:55:47.038242] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
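The spdkcli_tcp run starting here checks that the JSON-RPC client can reach the target over TCP rather than the Unix socket: a socat process bridges 127.0.0.1:9998 to /var/tmp/spdk.sock and rpc.py is pointed at the TCP address. The two commands the test uses, with flags exactly as they appear below:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods   # returns the method list dumped below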
00:07:29.240 [2024-07-25 15:55:47.038326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155834 ] 00:07:29.240 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.240 [2024-07-25 15:55:47.111084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:29.240 [2024-07-25 15:55:47.185704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.240 [2024-07-25 15:55:47.185705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.179 15:55:47 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.179 15:55:47 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:30.180 15:55:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=156051 00:07:30.180 15:55:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:30.180 15:55:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:30.180 [ 00:07:30.180 "spdk_get_version", 00:07:30.180 "rpc_get_methods", 00:07:30.180 "trace_get_info", 00:07:30.180 "trace_get_tpoint_group_mask", 00:07:30.180 "trace_disable_tpoint_group", 00:07:30.180 "trace_enable_tpoint_group", 00:07:30.180 "trace_clear_tpoint_mask", 00:07:30.180 "trace_set_tpoint_mask", 00:07:30.180 "vfu_tgt_set_base_path", 00:07:30.180 "framework_get_pci_devices", 00:07:30.180 "framework_get_config", 00:07:30.180 "framework_get_subsystems", 00:07:30.180 "keyring_get_keys", 00:07:30.180 "iobuf_get_stats", 00:07:30.180 "iobuf_set_options", 00:07:30.180 "sock_get_default_impl", 00:07:30.180 "sock_set_default_impl", 00:07:30.180 "sock_impl_set_options", 00:07:30.180 "sock_impl_get_options", 00:07:30.180 "vmd_rescan", 00:07:30.180 "vmd_remove_device", 00:07:30.180 "vmd_enable", 00:07:30.180 "accel_get_stats", 00:07:30.180 "accel_set_options", 00:07:30.180 "accel_set_driver", 00:07:30.180 "accel_crypto_key_destroy", 00:07:30.180 "accel_crypto_keys_get", 00:07:30.180 "accel_crypto_key_create", 00:07:30.180 "accel_assign_opc", 00:07:30.180 "accel_get_module_info", 00:07:30.180 "accel_get_opc_assignments", 00:07:30.180 "notify_get_notifications", 00:07:30.180 "notify_get_types", 00:07:30.180 "bdev_get_histogram", 00:07:30.180 "bdev_enable_histogram", 00:07:30.180 "bdev_set_qos_limit", 00:07:30.180 "bdev_set_qd_sampling_period", 00:07:30.180 "bdev_get_bdevs", 00:07:30.180 "bdev_reset_iostat", 00:07:30.180 "bdev_get_iostat", 00:07:30.180 "bdev_examine", 00:07:30.180 "bdev_wait_for_examine", 00:07:30.180 "bdev_set_options", 00:07:30.180 "scsi_get_devices", 00:07:30.180 "thread_set_cpumask", 00:07:30.180 "framework_get_governor", 00:07:30.180 "framework_get_scheduler", 00:07:30.180 "framework_set_scheduler", 00:07:30.180 "framework_get_reactors", 00:07:30.180 "thread_get_io_channels", 00:07:30.180 "thread_get_pollers", 00:07:30.180 "thread_get_stats", 00:07:30.180 "framework_monitor_context_switch", 00:07:30.180 "spdk_kill_instance", 00:07:30.180 "log_enable_timestamps", 00:07:30.180 "log_get_flags", 00:07:30.180 "log_clear_flag", 00:07:30.180 "log_set_flag", 00:07:30.180 "log_get_level", 00:07:30.180 "log_set_level", 00:07:30.180 "log_get_print_level", 00:07:30.180 "log_set_print_level", 00:07:30.180 "framework_enable_cpumask_locks", 00:07:30.180 "framework_disable_cpumask_locks", 
00:07:30.180 "framework_wait_init", 00:07:30.180 "framework_start_init", 00:07:30.180 "virtio_blk_create_transport", 00:07:30.180 "virtio_blk_get_transports", 00:07:30.180 "vhost_controller_set_coalescing", 00:07:30.180 "vhost_get_controllers", 00:07:30.180 "vhost_delete_controller", 00:07:30.180 "vhost_create_blk_controller", 00:07:30.180 "vhost_scsi_controller_remove_target", 00:07:30.180 "vhost_scsi_controller_add_target", 00:07:30.180 "vhost_start_scsi_controller", 00:07:30.180 "vhost_create_scsi_controller", 00:07:30.180 "ublk_recover_disk", 00:07:30.180 "ublk_get_disks", 00:07:30.180 "ublk_stop_disk", 00:07:30.180 "ublk_start_disk", 00:07:30.180 "ublk_destroy_target", 00:07:30.180 "ublk_create_target", 00:07:30.180 "nbd_get_disks", 00:07:30.180 "nbd_stop_disk", 00:07:30.180 "nbd_start_disk", 00:07:30.180 "env_dpdk_get_mem_stats", 00:07:30.180 "nvmf_stop_mdns_prr", 00:07:30.180 "nvmf_publish_mdns_prr", 00:07:30.180 "nvmf_subsystem_get_listeners", 00:07:30.180 "nvmf_subsystem_get_qpairs", 00:07:30.180 "nvmf_subsystem_get_controllers", 00:07:30.180 "nvmf_get_stats", 00:07:30.180 "nvmf_get_transports", 00:07:30.180 "nvmf_create_transport", 00:07:30.180 "nvmf_get_targets", 00:07:30.180 "nvmf_delete_target", 00:07:30.180 "nvmf_create_target", 00:07:30.180 "nvmf_subsystem_allow_any_host", 00:07:30.180 "nvmf_subsystem_remove_host", 00:07:30.180 "nvmf_subsystem_add_host", 00:07:30.180 "nvmf_ns_remove_host", 00:07:30.180 "nvmf_ns_add_host", 00:07:30.180 "nvmf_subsystem_remove_ns", 00:07:30.180 "nvmf_subsystem_add_ns", 00:07:30.180 "nvmf_subsystem_listener_set_ana_state", 00:07:30.180 "nvmf_discovery_get_referrals", 00:07:30.180 "nvmf_discovery_remove_referral", 00:07:30.180 "nvmf_discovery_add_referral", 00:07:30.180 "nvmf_subsystem_remove_listener", 00:07:30.180 "nvmf_subsystem_add_listener", 00:07:30.180 "nvmf_delete_subsystem", 00:07:30.180 "nvmf_create_subsystem", 00:07:30.180 "nvmf_get_subsystems", 00:07:30.180 "nvmf_set_crdt", 00:07:30.180 "nvmf_set_config", 00:07:30.180 "nvmf_set_max_subsystems", 00:07:30.180 "iscsi_get_histogram", 00:07:30.180 "iscsi_enable_histogram", 00:07:30.180 "iscsi_set_options", 00:07:30.180 "iscsi_get_auth_groups", 00:07:30.180 "iscsi_auth_group_remove_secret", 00:07:30.180 "iscsi_auth_group_add_secret", 00:07:30.180 "iscsi_delete_auth_group", 00:07:30.180 "iscsi_create_auth_group", 00:07:30.180 "iscsi_set_discovery_auth", 00:07:30.180 "iscsi_get_options", 00:07:30.180 "iscsi_target_node_request_logout", 00:07:30.180 "iscsi_target_node_set_redirect", 00:07:30.180 "iscsi_target_node_set_auth", 00:07:30.180 "iscsi_target_node_add_lun", 00:07:30.180 "iscsi_get_stats", 00:07:30.180 "iscsi_get_connections", 00:07:30.180 "iscsi_portal_group_set_auth", 00:07:30.180 "iscsi_start_portal_group", 00:07:30.180 "iscsi_delete_portal_group", 00:07:30.180 "iscsi_create_portal_group", 00:07:30.180 "iscsi_get_portal_groups", 00:07:30.180 "iscsi_delete_target_node", 00:07:30.180 "iscsi_target_node_remove_pg_ig_maps", 00:07:30.180 "iscsi_target_node_add_pg_ig_maps", 00:07:30.180 "iscsi_create_target_node", 00:07:30.180 "iscsi_get_target_nodes", 00:07:30.180 "iscsi_delete_initiator_group", 00:07:30.180 "iscsi_initiator_group_remove_initiators", 00:07:30.180 "iscsi_initiator_group_add_initiators", 00:07:30.180 "iscsi_create_initiator_group", 00:07:30.180 "iscsi_get_initiator_groups", 00:07:30.180 "keyring_linux_set_options", 00:07:30.180 "keyring_file_remove_key", 00:07:30.180 "keyring_file_add_key", 00:07:30.180 "vfu_virtio_create_scsi_endpoint", 00:07:30.180 
"vfu_virtio_scsi_remove_target", 00:07:30.180 "vfu_virtio_scsi_add_target", 00:07:30.180 "vfu_virtio_create_blk_endpoint", 00:07:30.180 "vfu_virtio_delete_endpoint", 00:07:30.180 "iaa_scan_accel_module", 00:07:30.180 "dsa_scan_accel_module", 00:07:30.180 "ioat_scan_accel_module", 00:07:30.180 "accel_error_inject_error", 00:07:30.180 "bdev_iscsi_delete", 00:07:30.180 "bdev_iscsi_create", 00:07:30.180 "bdev_iscsi_set_options", 00:07:30.180 "bdev_virtio_attach_controller", 00:07:30.180 "bdev_virtio_scsi_get_devices", 00:07:30.180 "bdev_virtio_detach_controller", 00:07:30.180 "bdev_virtio_blk_set_hotplug", 00:07:30.180 "bdev_ftl_set_property", 00:07:30.180 "bdev_ftl_get_properties", 00:07:30.180 "bdev_ftl_get_stats", 00:07:30.180 "bdev_ftl_unmap", 00:07:30.180 "bdev_ftl_unload", 00:07:30.180 "bdev_ftl_delete", 00:07:30.180 "bdev_ftl_load", 00:07:30.180 "bdev_ftl_create", 00:07:30.180 "bdev_aio_delete", 00:07:30.180 "bdev_aio_rescan", 00:07:30.180 "bdev_aio_create", 00:07:30.180 "blobfs_create", 00:07:30.180 "blobfs_detect", 00:07:30.180 "blobfs_set_cache_size", 00:07:30.180 "bdev_zone_block_delete", 00:07:30.180 "bdev_zone_block_create", 00:07:30.180 "bdev_delay_delete", 00:07:30.180 "bdev_delay_create", 00:07:30.180 "bdev_delay_update_latency", 00:07:30.180 "bdev_split_delete", 00:07:30.180 "bdev_split_create", 00:07:30.180 "bdev_error_inject_error", 00:07:30.180 "bdev_error_delete", 00:07:30.180 "bdev_error_create", 00:07:30.180 "bdev_raid_set_options", 00:07:30.180 "bdev_raid_remove_base_bdev", 00:07:30.180 "bdev_raid_add_base_bdev", 00:07:30.180 "bdev_raid_delete", 00:07:30.180 "bdev_raid_create", 00:07:30.180 "bdev_raid_get_bdevs", 00:07:30.180 "bdev_lvol_set_parent_bdev", 00:07:30.180 "bdev_lvol_set_parent", 00:07:30.180 "bdev_lvol_check_shallow_copy", 00:07:30.180 "bdev_lvol_start_shallow_copy", 00:07:30.180 "bdev_lvol_grow_lvstore", 00:07:30.180 "bdev_lvol_get_lvols", 00:07:30.180 "bdev_lvol_get_lvstores", 00:07:30.180 "bdev_lvol_delete", 00:07:30.180 "bdev_lvol_set_read_only", 00:07:30.180 "bdev_lvol_resize", 00:07:30.180 "bdev_lvol_decouple_parent", 00:07:30.180 "bdev_lvol_inflate", 00:07:30.180 "bdev_lvol_rename", 00:07:30.180 "bdev_lvol_clone_bdev", 00:07:30.180 "bdev_lvol_clone", 00:07:30.180 "bdev_lvol_snapshot", 00:07:30.180 "bdev_lvol_create", 00:07:30.180 "bdev_lvol_delete_lvstore", 00:07:30.180 "bdev_lvol_rename_lvstore", 00:07:30.180 "bdev_lvol_create_lvstore", 00:07:30.181 "bdev_passthru_delete", 00:07:30.181 "bdev_passthru_create", 00:07:30.181 "bdev_nvme_cuse_unregister", 00:07:30.181 "bdev_nvme_cuse_register", 00:07:30.181 "bdev_opal_new_user", 00:07:30.181 "bdev_opal_set_lock_state", 00:07:30.181 "bdev_opal_delete", 00:07:30.181 "bdev_opal_get_info", 00:07:30.181 "bdev_opal_create", 00:07:30.181 "bdev_nvme_opal_revert", 00:07:30.181 "bdev_nvme_opal_init", 00:07:30.181 "bdev_nvme_send_cmd", 00:07:30.181 "bdev_nvme_get_path_iostat", 00:07:30.181 "bdev_nvme_get_mdns_discovery_info", 00:07:30.181 "bdev_nvme_stop_mdns_discovery", 00:07:30.181 "bdev_nvme_start_mdns_discovery", 00:07:30.181 "bdev_nvme_set_multipath_policy", 00:07:30.181 "bdev_nvme_set_preferred_path", 00:07:30.181 "bdev_nvme_get_io_paths", 00:07:30.181 "bdev_nvme_remove_error_injection", 00:07:30.181 "bdev_nvme_add_error_injection", 00:07:30.181 "bdev_nvme_get_discovery_info", 00:07:30.181 "bdev_nvme_stop_discovery", 00:07:30.181 "bdev_nvme_start_discovery", 00:07:30.181 "bdev_nvme_get_controller_health_info", 00:07:30.181 "bdev_nvme_disable_controller", 00:07:30.181 "bdev_nvme_enable_controller", 00:07:30.181 
"bdev_nvme_reset_controller", 00:07:30.181 "bdev_nvme_get_transport_statistics", 00:07:30.181 "bdev_nvme_apply_firmware", 00:07:30.181 "bdev_nvme_detach_controller", 00:07:30.181 "bdev_nvme_get_controllers", 00:07:30.181 "bdev_nvme_attach_controller", 00:07:30.181 "bdev_nvme_set_hotplug", 00:07:30.181 "bdev_nvme_set_options", 00:07:30.181 "bdev_null_resize", 00:07:30.181 "bdev_null_delete", 00:07:30.181 "bdev_null_create", 00:07:30.181 "bdev_malloc_delete", 00:07:30.181 "bdev_malloc_create" 00:07:30.181 ] 00:07:30.181 15:55:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:30.181 15:55:48 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:30.181 15:55:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.181 15:55:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:30.181 15:55:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 155834 00:07:30.181 15:55:48 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 155834 ']' 00:07:30.181 15:55:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 155834 00:07:30.181 15:55:48 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:30.181 15:55:48 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.181 15:55:48 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 155834 00:07:30.181 15:55:48 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.181 15:55:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.181 15:55:48 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 155834' 00:07:30.181 killing process with pid 155834 00:07:30.181 15:55:48 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 155834 00:07:30.181 15:55:48 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 155834 00:07:30.440 00:07:30.440 real 0m1.497s 00:07:30.440 user 0m2.788s 00:07:30.440 sys 0m0.446s 00:07:30.440 15:55:48 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.440 15:55:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.440 ************************************ 00:07:30.440 END TEST spdkcli_tcp 00:07:30.440 ************************************ 00:07:30.700 15:55:48 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:30.700 15:55:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.700 15:55:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.700 15:55:48 -- common/autotest_common.sh@10 -- # set +x 00:07:30.700 ************************************ 00:07:30.700 START TEST dpdk_mem_utility 00:07:30.700 ************************************ 00:07:30.700 15:55:48 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:30.700 * Looking for test storage... 
00:07:30.700 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:07:30.700 15:55:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:30.700 15:55:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=156126 00:07:30.700 15:55:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 156126 00:07:30.700 15:55:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:30.700 15:55:48 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 156126 ']' 00:07:30.700 15:55:48 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.700 15:55:48 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.700 15:55:48 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.700 15:55:48 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.700 15:55:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:30.700 [2024-07-25 15:55:48.595568] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:07:30.700 [2024-07-25 15:55:48.595644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156126 ] 00:07:30.700 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.700 [2024-07-25 15:55:48.668114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.959 [2024-07-25 15:55:48.748150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.528 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.528 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:31.528 15:55:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:31.528 15:55:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:31.528 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.528 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:31.528 { 00:07:31.528 "filename": "/tmp/spdk_mem_dump.txt" 00:07:31.528 } 00:07:31.528 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.528 15:55:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:31.528 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:31.528 1 heaps totaling size 814.000000 MiB 00:07:31.529 size: 814.000000 MiB heap id: 0 00:07:31.529 end heaps---------- 00:07:31.529 8 mempools totaling size 598.116089 MiB 00:07:31.529 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:31.529 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:31.529 size: 84.521057 MiB name: bdev_io_156126 00:07:31.529 size: 51.011292 MiB name: evtpool_156126 00:07:31.529 
size: 50.003479 MiB name: msgpool_156126 00:07:31.529 size: 21.763794 MiB name: PDU_Pool 00:07:31.529 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:31.529 size: 0.026123 MiB name: Session_Pool 00:07:31.529 end mempools------- 00:07:31.529 6 memzones totaling size 4.142822 MiB 00:07:31.529 size: 1.000366 MiB name: RG_ring_0_156126 00:07:31.529 size: 1.000366 MiB name: RG_ring_1_156126 00:07:31.529 size: 1.000366 MiB name: RG_ring_4_156126 00:07:31.529 size: 1.000366 MiB name: RG_ring_5_156126 00:07:31.529 size: 0.125366 MiB name: RG_ring_2_156126 00:07:31.529 size: 0.015991 MiB name: RG_ring_3_156126 00:07:31.529 end memzones------- 00:07:31.529 15:55:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:31.789 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:31.789 list of free elements. size: 12.519348 MiB 00:07:31.789 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:31.789 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:31.789 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:31.789 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:31.789 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:31.789 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:31.789 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:31.789 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:31.789 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:31.789 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:31.789 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:31.789 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:31.789 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:31.789 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:31.789 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:31.789 list of standard malloc elements. 
size: 199.218079 MiB 00:07:31.789 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:31.789 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:31.789 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:31.789 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:31.789 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:31.789 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:31.789 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:31.789 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:31.789 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:31.789 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:31.789 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:31.789 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:31.789 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:31.789 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:31.789 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:31.789 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:31.789 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:31.789 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:31.789 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:31.789 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:31.789 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:31.789 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:31.789 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:31.790 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:31.790 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:31.790 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:31.790 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:31.790 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:31.790 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:31.790 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:31.790 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:31.790 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:31.790 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:31.790 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:31.790 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:31.790 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:31.790 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:31.790 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:31.790 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:31.790 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:31.790 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:31.790 list of memzone associated elements. 
size: 602.262573 MiB 00:07:31.790 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:31.790 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:31.790 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:31.790 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:31.790 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:31.790 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_156126_0 00:07:31.790 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:31.790 associated memzone info: size: 48.002930 MiB name: MP_evtpool_156126_0 00:07:31.790 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:31.790 associated memzone info: size: 48.002930 MiB name: MP_msgpool_156126_0 00:07:31.790 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:31.790 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:31.790 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:31.790 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:31.790 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:31.790 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_156126 00:07:31.790 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:31.790 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_156126 00:07:31.790 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:31.790 associated memzone info: size: 1.007996 MiB name: MP_evtpool_156126 00:07:31.790 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:31.790 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:31.790 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:31.790 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:31.790 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:31.790 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:31.790 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:31.790 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:31.790 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:31.790 associated memzone info: size: 1.000366 MiB name: RG_ring_0_156126 00:07:31.790 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:31.790 associated memzone info: size: 1.000366 MiB name: RG_ring_1_156126 00:07:31.790 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:31.790 associated memzone info: size: 1.000366 MiB name: RG_ring_4_156126 00:07:31.790 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:31.790 associated memzone info: size: 1.000366 MiB name: RG_ring_5_156126 00:07:31.790 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:31.790 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_156126 00:07:31.790 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:31.790 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:31.790 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:31.790 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:31.790 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:31.790 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:31.790 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:31.790 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_156126 00:07:31.790 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:31.790 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:31.790 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:31.790 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:31.790 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:31.790 associated memzone info: size: 0.015991 MiB name: RG_ring_3_156126 00:07:31.790 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:31.790 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:31.790 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:31.790 associated memzone info: size: 0.000183 MiB name: MP_msgpool_156126 00:07:31.790 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:31.790 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_156126 00:07:31.790 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:31.790 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:31.790 15:55:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:31.790 15:55:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 156126 00:07:31.790 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 156126 ']' 00:07:31.790 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 156126 00:07:31.790 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:31.790 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.790 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 156126 00:07:31.790 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.790 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.790 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 156126' 00:07:31.790 killing process with pid 156126 00:07:31.790 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 156126 00:07:31.790 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 156126 00:07:32.050 00:07:32.050 real 0m1.385s 00:07:32.050 user 0m1.445s 00:07:32.050 sys 0m0.406s 00:07:32.050 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.050 15:55:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:32.050 ************************************ 00:07:32.050 END TEST dpdk_mem_utility 00:07:32.050 ************************************ 00:07:32.050 15:55:49 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:07:32.050 15:55:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.050 15:55:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.050 15:55:49 -- common/autotest_common.sh@10 -- # set +x 00:07:32.050 ************************************ 00:07:32.050 START TEST event 00:07:32.050 ************************************ 00:07:32.050 15:55:49 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:07:32.050 * Looking for test storage... 
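A minimal sketch of the memory-inspection sequence the dpdk_mem_utility test above exercises, assuming a spdk_tgt is already running on the default /var/tmp/spdk.sock and with SPDK_ROOT standing in for the repository root (both placeholders, not values taken from this run):

  # Ask the running target to dump DPDK memory stats; the RPC reports the dump file,
  # /tmp/spdk_mem_dump.txt, which the helper script below parses.
  $SPDK_ROOT/scripts/rpc.py env_dpdk_get_mem_stats

  # Summarize heaps, mempools and memzones from the dump (the "end heaps / end mempools /
  # end memzones" report seen above).
  $SPDK_ROOT/scripts/dpdk_mem_info.py

  # Print the per-element busy/free detail for heap 0, as in the "-m 0" output above.
  $SPDK_ROOT/scripts/dpdk_mem_info.py -m 0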
00:07:32.050 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:32.050 15:55:50 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:32.050 15:55:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:32.050 15:55:50 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:32.050 15:55:50 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:32.050 15:55:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.050 15:55:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:32.310 ************************************ 00:07:32.310 START TEST event_perf 00:07:32.310 ************************************ 00:07:32.310 15:55:50 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:32.310 Running I/O for 1 seconds...[2024-07-25 15:55:50.075785] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:07:32.310 [2024-07-25 15:55:50.075856] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156398 ] 00:07:32.310 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.310 [2024-07-25 15:55:50.146559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.310 [2024-07-25 15:55:50.222740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.310 [2024-07-25 15:55:50.222846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.310 [2024-07-25 15:55:50.222884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.310 [2024-07-25 15:55:50.222885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.689 Running I/O for 1 seconds... 00:07:33.689 lcore 0: 188693 00:07:33.689 lcore 1: 188692 00:07:33.689 lcore 2: 188691 00:07:33.689 lcore 3: 188692 00:07:33.689 done. 00:07:33.689 00:07:33.689 real 0m1.229s 00:07:33.689 user 0m4.134s 00:07:33.689 sys 0m0.092s 00:07:33.689 15:55:51 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.689 15:55:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:33.689 ************************************ 00:07:33.689 END TEST event_perf 00:07:33.689 ************************************ 00:07:33.689 15:55:51 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:33.689 15:55:51 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:33.689 15:55:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.689 15:55:51 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.689 ************************************ 00:07:33.689 START TEST event_reactor 00:07:33.689 ************************************ 00:07:33.689 15:55:51 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:33.689 [2024-07-25 15:55:51.372198] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:07:33.689 [2024-07-25 15:55:51.372264] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156639 ] 00:07:33.689 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.689 [2024-07-25 15:55:51.445763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.689 [2024-07-25 15:55:51.519438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.627 test_start 00:07:34.627 oneshot 00:07:34.627 tick 100 00:07:34.627 tick 100 00:07:34.627 tick 250 00:07:34.627 tick 100 00:07:34.627 tick 100 00:07:34.627 tick 100 00:07:34.627 tick 250 00:07:34.627 tick 500 00:07:34.627 tick 100 00:07:34.627 tick 100 00:07:34.627 tick 250 00:07:34.627 tick 100 00:07:34.627 tick 100 00:07:34.627 test_end 00:07:34.627 00:07:34.627 real 0m1.227s 00:07:34.627 user 0m1.135s 00:07:34.627 sys 0m0.086s 00:07:34.627 15:55:52 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.627 15:55:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:34.627 ************************************ 00:07:34.627 END TEST event_reactor 00:07:34.627 ************************************ 00:07:34.627 15:55:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:34.627 15:55:52 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:34.627 15:55:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.627 15:55:52 event -- common/autotest_common.sh@10 -- # set +x 00:07:34.886 ************************************ 00:07:34.886 START TEST event_reactor_perf 00:07:34.886 ************************************ 00:07:34.886 15:55:52 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:34.886 [2024-07-25 15:55:52.665195] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:07:34.886 [2024-07-25 15:55:52.665264] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156869 ] 00:07:34.886 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.886 [2024-07-25 15:55:52.738481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.886 [2024-07-25 15:55:52.812261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.266 test_start 00:07:36.266 test_end 00:07:36.266 Performance: 915546 events per second 00:07:36.266 00:07:36.266 real 0m1.228s 00:07:36.266 user 0m1.138s 00:07:36.266 sys 0m0.086s 00:07:36.266 15:55:53 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.266 15:55:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:36.266 ************************************ 00:07:36.266 END TEST event_reactor_perf 00:07:36.266 ************************************ 00:07:36.266 15:55:53 event -- event/event.sh@49 -- # uname -s 00:07:36.266 15:55:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:36.266 15:55:53 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:36.266 15:55:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.266 15:55:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.266 15:55:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:36.266 ************************************ 00:07:36.266 START TEST event_scheduler 00:07:36.266 ************************************ 00:07:36.266 15:55:53 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:36.266 * Looking for test storage... 00:07:36.266 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:07:36.266 15:55:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:36.266 15:55:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=157127 00:07:36.266 15:55:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:36.267 15:55:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:36.267 15:55:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 157127 00:07:36.267 15:55:54 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 157127 ']' 00:07:36.267 15:55:54 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.267 15:55:54 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.267 15:55:54 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:36.267 15:55:54 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.267 15:55:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:36.267 [2024-07-25 15:55:54.058959] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:07:36.267 [2024-07-25 15:55:54.059047] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157127 ] 00:07:36.267 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.267 [2024-07-25 15:55:54.131231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.267 [2024-07-25 15:55:54.212152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.267 [2024-07-25 15:55:54.212178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.267 [2024-07-25 15:55:54.212280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.267 [2024-07-25 15:55:54.212281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.204 15:55:54 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.204 15:55:54 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:37.204 15:55:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:37.204 15:55:54 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.204 15:55:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:37.204 [2024-07-25 15:55:54.890752] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:37.204 [2024-07-25 15:55:54.890777] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:37.204 [2024-07-25 15:55:54.890786] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:37.204 [2024-07-25 15:55:54.890792] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:37.204 [2024-07-25 15:55:54.890797] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:37.204 15:55:54 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.204 15:55:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:37.204 15:55:54 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.204 15:55:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:37.204 [2024-07-25 15:55:54.959589] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
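The framework_set_scheduler / framework_start_init pair traced above can also be driven by hand; a short sketch, assuming the target was launched with --wait-for-rpc (as the scheduler app is here) and with SPDK_ROOT as a placeholder for the repo root:

  # Select the dynamic scheduler while the framework is still waiting for init,
  # then let initialization finish; this mirrors the rpc_cmd calls in the trace above.
  $SPDK_ROOT/scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
  $SPDK_ROOT/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init

  # Optional check that the dynamic scheduler (load limit 20, core limit 80, core busy 95
  # per the notices above) is now active.
  $SPDK_ROOT/scripts/rpc.py -s /var/tmp/spdk.sock framework_get_scheduler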
00:07:37.204 15:55:54 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.204 15:55:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:37.204 15:55:54 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.204 15:55:54 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.204 15:55:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:37.204 ************************************ 00:07:37.204 START TEST scheduler_create_thread 00:07:37.204 ************************************ 00:07:37.204 15:55:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:37.204 15:55:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:37.204 15:55:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.204 15:55:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.204 2 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.204 3 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.204 4 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.204 5 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.204 6 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.204 7 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.204 8 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.204 9 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.204 10 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.204 15:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.582 15:55:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.582 15:55:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:38.582 15:55:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:38.582 15:55:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.582 15:55:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:39.961 15:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.961 00:07:39.961 real 0m2.618s 00:07:39.961 user 0m0.024s 00:07:39.961 sys 0m0.003s 00:07:39.961 15:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.961 15:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:39.961 ************************************ 00:07:39.961 END TEST scheduler_create_thread 00:07:39.961 ************************************ 00:07:39.961 15:55:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:39.961 15:55:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 157127 00:07:39.961 15:55:57 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 157127 ']' 00:07:39.961 15:55:57 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 157127 00:07:39.961 15:55:57 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:39.961 15:55:57 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.961 15:55:57 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 157127 00:07:39.961 15:55:57 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:39.961 15:55:57 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:39.961 15:55:57 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 157127' 00:07:39.961 killing process with pid 157127 00:07:39.961 15:55:57 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 157127 00:07:39.961 15:55:57 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 157127 00:07:40.220 [2024-07-25 15:55:58.093916] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
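The thread create/re-weight/delete churn traced above goes through an rpc.py plugin registered by the scheduler test app; a hedged sketch of that pattern (the PYTHONPATH setup is an assumption about how the plugin module is located, not something shown in this log):

  # Create a thread pinned to core 0 (-m 0x1) that reports itself 100% active (-a 100).
  PYTHONPATH=$SPDK_ROOT/test/event/scheduler \
    $SPDK_ROOT/scripts/rpc.py --plugin scheduler_plugin \
    scheduler_thread_create -n active_pinned -m 0x1 -a 100

  # The test then re-weights and removes threads by the id returned at creation:
  #   scheduler_thread_set_active <thread_id> <active %>
  #   scheduler_thread_delete <thread_id>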
00:07:40.480 00:07:40.480 real 0m4.328s 00:07:40.480 user 0m8.208s 00:07:40.480 sys 0m0.368s 00:07:40.480 15:55:58 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.480 15:55:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:40.480 ************************************ 00:07:40.480 END TEST event_scheduler 00:07:40.480 ************************************ 00:07:40.480 15:55:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:40.480 15:55:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:40.480 15:55:58 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.480 15:55:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.480 15:55:58 event -- common/autotest_common.sh@10 -- # set +x 00:07:40.480 ************************************ 00:07:40.480 START TEST app_repeat 00:07:40.480 ************************************ 00:07:40.480 15:55:58 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:40.480 15:55:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.480 15:55:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.480 15:55:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:40.480 15:55:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:40.480 15:55:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:40.480 15:55:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:40.480 15:55:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:40.480 15:55:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=157915 00:07:40.480 15:55:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:40.480 15:55:58 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:40.480 15:55:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 157915' 00:07:40.480 Process app_repeat pid: 157915 00:07:40.480 15:55:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:40.480 15:55:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:40.480 spdk_app_start Round 0 00:07:40.480 15:55:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 157915 /var/tmp/spdk-nbd.sock 00:07:40.480 15:55:58 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 157915 ']' 00:07:40.480 15:55:58 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:40.480 15:55:58 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.480 15:55:58 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:40.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:40.480 15:55:58 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.480 15:55:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:40.480 [2024-07-25 15:55:58.374815] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:07:40.480 [2024-07-25 15:55:58.374889] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157915 ] 00:07:40.480 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.480 [2024-07-25 15:55:58.446259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:40.739 [2024-07-25 15:55:58.528407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.739 [2024-07-25 15:55:58.528408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.307 15:55:59 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.307 15:55:59 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:41.307 15:55:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:41.566 Malloc0 00:07:41.566 15:55:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:41.825 Malloc1 00:07:41.825 15:55:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:41.825 /dev/nbd0 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:41.825 15:55:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:41.825 15:55:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:41.825 15:55:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:41.825 15:55:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:41.825 15:55:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:41.825 15:55:59 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:41.825 15:55:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:41.825 15:55:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:41.825 15:55:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:41.825 15:55:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:41.825 1+0 records in 00:07:41.825 1+0 records out 00:07:41.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258203 s, 15.9 MB/s 00:07:41.825 15:55:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:41.825 15:55:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:41.825 15:55:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:41.825 15:55:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:41.825 15:55:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:41.826 15:55:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:41.826 15:55:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:41.826 15:55:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:42.085 /dev/nbd1 00:07:42.086 15:55:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:42.086 15:55:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:42.086 15:55:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:42.086 15:55:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:42.086 15:55:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:42.086 15:55:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:42.086 15:55:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:42.086 15:56:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:42.086 15:56:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:42.086 15:56:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:42.086 15:56:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:42.086 1+0 records in 00:07:42.086 1+0 records out 00:07:42.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198254 s, 20.7 MB/s 00:07:42.086 15:56:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:42.086 15:56:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:42.086 15:56:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:42.086 15:56:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:42.086 15:56:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:42.086 15:56:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:42.086 
15:56:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:42.086 15:56:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.086 15:56:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.086 15:56:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:42.345 { 00:07:42.345 "nbd_device": "/dev/nbd0", 00:07:42.345 "bdev_name": "Malloc0" 00:07:42.345 }, 00:07:42.345 { 00:07:42.345 "nbd_device": "/dev/nbd1", 00:07:42.345 "bdev_name": "Malloc1" 00:07:42.345 } 00:07:42.345 ]' 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:42.345 { 00:07:42.345 "nbd_device": "/dev/nbd0", 00:07:42.345 "bdev_name": "Malloc0" 00:07:42.345 }, 00:07:42.345 { 00:07:42.345 "nbd_device": "/dev/nbd1", 00:07:42.345 "bdev_name": "Malloc1" 00:07:42.345 } 00:07:42.345 ]' 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:42.345 /dev/nbd1' 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:42.345 /dev/nbd1' 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:42.345 15:56:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:42.345 256+0 records in 00:07:42.345 256+0 records out 00:07:42.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104048 s, 101 MB/s 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:42.346 256+0 records in 00:07:42.346 256+0 records out 00:07:42.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152738 s, 68.7 MB/s 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:42.346 256+0 records in 00:07:42.346 256+0 records out 
00:07:42.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162176 s, 64.7 MB/s 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.346 15:56:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:42.605 15:56:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:42.605 15:56:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:42.605 15:56:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:42.605 15:56:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.605 15:56:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.605 15:56:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:42.605 15:56:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:42.605 15:56:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.605 15:56:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.605 15:56:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:42.864 15:56:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:42.864 15:56:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:42.864 15:56:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:42.864 15:56:00 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.864 15:56:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.864 15:56:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:42.864 15:56:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:42.864 15:56:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.864 15:56:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.864 15:56:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.865 15:56:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:43.124 15:56:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:43.124 15:56:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:43.124 15:56:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:43.124 15:56:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:43.124 15:56:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:43.124 15:56:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:43.124 15:56:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:43.124 15:56:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:43.124 15:56:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:43.124 15:56:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:43.124 15:56:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:43.124 15:56:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:43.124 15:56:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:43.124 15:56:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:43.383 [2024-07-25 15:56:01.280242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:43.384 [2024-07-25 15:56:01.350905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.384 [2024-07-25 15:56:01.350905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.642 [2024-07-25 15:56:01.390406] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:43.642 [2024-07-25 15:56:01.390444] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:46.178 15:56:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:46.178 15:56:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:46.178 spdk_app_start Round 1 00:07:46.178 15:56:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 157915 /var/tmp/spdk-nbd.sock 00:07:46.178 15:56:04 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 157915 ']' 00:07:46.178 15:56:04 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:46.178 15:56:04 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.178 15:56:04 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:46.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
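Round 0 above ran nbd_rpc_data_verify end to end; a condensed sketch of that pattern, with /tmp/nbdrandtest and SPDK_ROOT as placeholders for the workspace paths seen in the trace:

  RPC="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create 64 4096            # 64 MiB bdev with 4096-byte blocks -> Malloc0
  $RPC nbd_start_disk Malloc0 /dev/nbd0      # expose the bdev as an NBD block device
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0    # verify the 1 MiB of random data reads back intact
  $RPC nbd_stop_disk /dev/nbd0
  $RPC bdev_malloc_delete Malloc0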
00:07:46.178 15:56:04 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.178 15:56:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:46.436 15:56:04 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.436 15:56:04 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:46.436 15:56:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:46.695 Malloc0 00:07:46.695 15:56:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:46.695 Malloc1 00:07:46.695 15:56:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:46.695 15:56:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.695 15:56:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:46.695 15:56:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:46.695 15:56:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:46.695 15:56:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:46.695 15:56:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:46.695 15:56:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.695 15:56:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:46.695 15:56:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:46.695 15:56:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:46.695 15:56:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:46.695 15:56:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:46.695 15:56:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:46.695 15:56:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:46.695 15:56:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:46.953 /dev/nbd0 00:07:46.953 15:56:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:46.953 15:56:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:46.953 15:56:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:46.953 15:56:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:46.953 15:56:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:46.953 15:56:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:46.953 15:56:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:46.953 15:56:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:46.953 15:56:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:46.953 15:56:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:46.953 15:56:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:46.953 1+0 records in 00:07:46.953 1+0 records out 00:07:46.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001857 s, 22.1 MB/s 00:07:46.953 15:56:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:46.953 15:56:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:46.953 15:56:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:46.953 15:56:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:46.953 15:56:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:46.953 15:56:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.953 15:56:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:46.953 15:56:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:47.212 /dev/nbd1 00:07:47.212 15:56:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:47.212 15:56:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:47.212 15:56:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:47.212 15:56:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:47.212 15:56:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.212 15:56:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.212 15:56:05 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:47.212 15:56:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:47.212 15:56:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.212 15:56:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.212 15:56:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:47.212 1+0 records in 00:07:47.212 1+0 records out 00:07:47.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192736 s, 21.3 MB/s 00:07:47.212 15:56:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:47.212 15:56:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:47.212 15:56:05 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:47.212 15:56:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.212 15:56:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:47.212 15:56:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.212 15:56:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:47.212 15:56:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:47.212 15:56:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.212 15:56:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:47.471 { 00:07:47.471 "nbd_device": "/dev/nbd0", 00:07:47.471 "bdev_name": "Malloc0" 00:07:47.471 }, 00:07:47.471 { 00:07:47.471 "nbd_device": "/dev/nbd1", 00:07:47.471 "bdev_name": "Malloc1" 00:07:47.471 } 00:07:47.471 ]' 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:47.471 { 00:07:47.471 "nbd_device": "/dev/nbd0", 00:07:47.471 "bdev_name": "Malloc0" 00:07:47.471 }, 00:07:47.471 { 00:07:47.471 "nbd_device": "/dev/nbd1", 00:07:47.471 "bdev_name": "Malloc1" 00:07:47.471 } 00:07:47.471 ]' 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:47.471 /dev/nbd1' 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:47.471 /dev/nbd1' 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:47.471 256+0 records in 00:07:47.471 256+0 records out 00:07:47.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102749 s, 102 MB/s 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:47.471 256+0 records in 00:07:47.471 256+0 records out 00:07:47.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145579 s, 72.0 MB/s 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:47.471 256+0 records in 00:07:47.471 256+0 records out 00:07:47.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015455 s, 67.8 MB/s 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.471 15:56:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:47.730 15:56:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:47.730 15:56:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:47.730 15:56:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:47.730 15:56:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.730 15:56:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.730 15:56:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:47.730 15:56:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:47.730 15:56:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.730 15:56:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.730 15:56:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:47.989 15:56:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:47.989 15:56:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:47.989 15:56:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:47.989 15:56:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.989 15:56:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.989 15:56:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:47.989 15:56:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:47.989 15:56:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.989 15:56:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:07:47.989 15:56:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.989 15:56:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:47.989 15:56:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:47.989 15:56:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:47.989 15:56:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.248 15:56:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:48.248 15:56:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.248 15:56:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:48.248 15:56:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:48.248 15:56:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:48.248 15:56:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:48.248 15:56:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:48.248 15:56:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:48.248 15:56:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:48.248 15:56:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:48.248 15:56:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:48.506 [2024-07-25 15:56:06.362309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:48.506 [2024-07-25 15:56:06.432674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.506 [2024-07-25 15:56:06.432675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.506 [2024-07-25 15:56:06.472924] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:48.506 [2024-07-25 15:56:06.472964] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:51.809 15:56:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:51.809 15:56:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:51.809 spdk_app_start Round 2 00:07:51.809 15:56:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 157915 /var/tmp/spdk-nbd.sock 00:07:51.809 15:56:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 157915 ']' 00:07:51.809 15:56:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:51.809 15:56:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.809 15:56:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:51.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
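Round 1 above repeats the same NBD read/write verification as Round 0. Condensed to the RPCs and commands that actually appear in the trace (the malloc geometry, socket path and 1 MiB compare size are taken from the log; the temp file really lives under spdk/test/event/, and error handling is omitted):

RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
TMP=/tmp/nbdrandtest                                          # stand-in for spdk/test/event/nbdrandtest

$RPC bdev_malloc_create 64 4096                               # 64 MB malloc bdev, 4096-byte blocks
$RPC nbd_start_disk Malloc0 /dev/nbd0                         # export the bdev as an NBD block device
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done    # same check waitfornbd polls for

dd if=/dev/urandom of="$TMP" bs=4096 count=256                # 1 MiB of random data
dd if="$TMP" of=/dev/nbd0 bs=4096 count=256 oflag=direct      # write it through the NBD device
cmp -b -n 1M "$TMP" /dev/nbd0                                 # read back and compare byte for byte

$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_get_disks                                            # prints '[]' once every export is gone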
00:07:51.809 15:56:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.809 15:56:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:51.809 15:56:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.809 15:56:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:51.809 15:56:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:51.809 Malloc0 00:07:51.809 15:56:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:51.809 Malloc1 00:07:51.809 15:56:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:51.809 15:56:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.809 15:56:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:51.809 15:56:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:51.809 15:56:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.809 15:56:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:51.809 15:56:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:51.809 15:56:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.809 15:56:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:51.809 15:56:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:51.809 15:56:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.809 15:56:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:51.809 15:56:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:51.809 15:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:51.809 15:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:51.809 15:56:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:52.070 /dev/nbd0 00:07:52.070 15:56:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:52.070 15:56:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:52.070 15:56:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:52.070 15:56:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:52.070 15:56:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:52.070 15:56:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:52.070 15:56:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:52.070 15:56:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:52.070 15:56:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:52.070 15:56:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:52.070 15:56:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:52.070 1+0 records in 00:07:52.070 1+0 records out 00:07:52.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220505 s, 18.6 MB/s 00:07:52.070 15:56:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:52.070 15:56:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:52.070 15:56:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:52.070 15:56:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:52.070 15:56:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:52.070 15:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.070 15:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:52.070 15:56:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:52.329 /dev/nbd1 00:07:52.330 15:56:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:52.330 15:56:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:52.330 15:56:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:52.330 15:56:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:52.330 15:56:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:52.330 15:56:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:52.330 15:56:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:52.330 15:56:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:52.330 15:56:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:52.330 15:56:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:52.330 15:56:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:52.330 1+0 records in 00:07:52.330 1+0 records out 00:07:52.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022397 s, 18.3 MB/s 00:07:52.330 15:56:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:52.330 15:56:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:52.330 15:56:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:52.330 15:56:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:52.330 15:56:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:52.330 15:56:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.330 15:56:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:52.330 15:56:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:52.330 15:56:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.330 15:56:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:52.589 { 00:07:52.589 "nbd_device": "/dev/nbd0", 00:07:52.589 "bdev_name": "Malloc0" 00:07:52.589 }, 00:07:52.589 { 00:07:52.589 "nbd_device": "/dev/nbd1", 00:07:52.589 "bdev_name": "Malloc1" 00:07:52.589 } 00:07:52.589 ]' 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:52.589 { 00:07:52.589 "nbd_device": "/dev/nbd0", 00:07:52.589 "bdev_name": "Malloc0" 00:07:52.589 }, 00:07:52.589 { 00:07:52.589 "nbd_device": "/dev/nbd1", 00:07:52.589 "bdev_name": "Malloc1" 00:07:52.589 } 00:07:52.589 ]' 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:52.589 /dev/nbd1' 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:52.589 /dev/nbd1' 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:52.589 256+0 records in 00:07:52.589 256+0 records out 00:07:52.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103617 s, 101 MB/s 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:52.589 256+0 records in 00:07:52.589 256+0 records out 00:07:52.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149206 s, 70.3 MB/s 00:07:52.589 15:56:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:52.590 256+0 records in 00:07:52.590 256+0 records out 00:07:52.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161757 s, 64.8 MB/s 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.590 15:56:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.849 15:56:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:53.108 15:56:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:53.108 15:56:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.108 15:56:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:07:53.108 15:56:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.108 15:56:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:53.108 15:56:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:53.108 15:56:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:53.108 15:56:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:53.108 15:56:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:53.108 15:56:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:53.108 15:56:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:53.108 15:56:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:53.108 15:56:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:53.108 15:56:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:53.108 15:56:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:53.108 15:56:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:53.108 15:56:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:53.108 15:56:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:53.367 15:56:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:53.626 [2024-07-25 15:56:11.436914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:53.626 [2024-07-25 15:56:11.507167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.626 [2024-07-25 15:56:11.507168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.626 [2024-07-25 15:56:11.546583] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:53.626 [2024-07-25 15:56:11.546620] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:56.918 15:56:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 157915 /var/tmp/spdk-nbd.sock 00:07:56.918 15:56:14 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 157915 ']' 00:07:56.918 15:56:14 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:56.918 15:56:14 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.918 15:56:14 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:56.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:56.918 15:56:14 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.918 15:56:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:56.918 15:56:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.918 15:56:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:56.918 15:56:14 event.app_repeat -- event/event.sh@39 -- # killprocess 157915 00:07:56.918 15:56:14 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 157915 ']' 00:07:56.918 15:56:14 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 157915 00:07:56.918 15:56:14 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:56.918 15:56:14 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.918 15:56:14 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 157915 00:07:56.918 15:56:14 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.919 15:56:14 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.919 15:56:14 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 157915' 00:07:56.919 killing process with pid 157915 00:07:56.919 15:56:14 event.app_repeat -- common/autotest_common.sh@969 -- # kill 157915 00:07:56.919 15:56:14 event.app_repeat -- common/autotest_common.sh@974 -- # wait 157915 00:07:56.919 spdk_app_start is called in Round 0. 00:07:56.919 Shutdown signal received, stop current app iteration 00:07:56.919 Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 reinitialization... 00:07:56.919 spdk_app_start is called in Round 1. 00:07:56.919 Shutdown signal received, stop current app iteration 00:07:56.919 Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 reinitialization... 00:07:56.919 spdk_app_start is called in Round 2. 00:07:56.919 Shutdown signal received, stop current app iteration 00:07:56.919 Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 reinitialization... 00:07:56.919 spdk_app_start is called in Round 3. 
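The teardown just traced runs through autotest_common.sh's killprocess helper. Its logic, read off the xtrace rather than the source (only the branch taken in this run is shown):

killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" || return 1                                  # bail out if the pid is already gone
  local name=
  [ "$(uname)" = Linux ] && name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" for an SPDK app
  # the trace also compares $name against "sudo"; that branch is not taken here
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                                 # reap the process so its exit status is observed
}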
00:07:56.919 Shutdown signal received, stop current app iteration 00:07:56.919 15:56:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:56.919 15:56:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:56.919 00:07:56.919 real 0m16.308s 00:07:56.919 user 0m35.104s 00:07:56.919 sys 0m2.608s 00:07:56.919 15:56:14 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.919 15:56:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:56.919 ************************************ 00:07:56.919 END TEST app_repeat 00:07:56.919 ************************************ 00:07:56.919 15:56:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:56.919 15:56:14 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:56.919 15:56:14 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.919 15:56:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.919 15:56:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:56.919 ************************************ 00:07:56.919 START TEST cpu_locks 00:07:56.919 ************************************ 00:07:56.919 15:56:14 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:56.919 * Looking for test storage... 00:07:56.919 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:56.919 15:56:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:56.919 15:56:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:56.919 15:56:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:56.919 15:56:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:56.919 15:56:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.919 15:56:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.919 15:56:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.919 ************************************ 00:07:56.919 START TEST default_locks 00:07:56.919 ************************************ 00:07:56.919 15:56:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:56.919 15:56:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=160818 00:07:56.919 15:56:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 160818 00:07:56.919 15:56:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:56.919 15:56:14 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 160818 ']' 00:07:56.919 15:56:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.919 15:56:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.919 15:56:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
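The default_locks test starting here reduces to one assertion: once spdk_tgt claims core 0 (mask 0x1), the process must hold a file lock whose name contains spdk_cpu_lock, which is exactly what locks_exist greps for in the lslocks output. A minimal reproduction using the same commands seen in the trace (binary path shortened):

./build/bin/spdk_tgt -m 0x1 &                                 # one reactor pinned to core 0
pid=$!
waitforlisten "$pid"                                          # autotest helper: poll until /var/tmp/spdk.sock answers

if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
  echo "core lock held by pid $pid"                           # expected outcome of the positive test
fi

killprocess "$pid"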
00:07:56.919 15:56:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.919 15:56:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.919 [2024-07-25 15:56:14.869825] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:07:56.919 [2024-07-25 15:56:14.869888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160818 ] 00:07:56.919 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.179 [2024-07-25 15:56:14.940590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.179 [2024-07-25 15:56:15.021418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.747 15:56:15 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.747 15:56:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:57.747 15:56:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 160818 00:07:57.747 15:56:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 160818 00:07:57.747 15:56:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:58.006 lslocks: write error 00:07:58.006 15:56:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 160818 00:07:58.006 15:56:15 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 160818 ']' 00:07:58.006 15:56:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 160818 00:07:58.006 15:56:15 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:58.006 15:56:15 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.006 15:56:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 160818 00:07:58.006 15:56:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:58.006 15:56:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:58.006 15:56:15 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 160818' 00:07:58.006 killing process with pid 160818 00:07:58.006 15:56:15 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 160818 00:07:58.006 15:56:15 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 160818 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 160818 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 160818 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 160818 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 160818 ']' 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.265 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (160818) - No such process 00:07:58.265 ERROR: process (pid: 160818) is no longer running 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:58.265 00:07:58.265 real 0m1.306s 00:07:58.265 user 0m1.358s 00:07:58.265 sys 0m0.415s 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.265 15:56:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.265 ************************************ 00:07:58.265 END TEST default_locks 00:07:58.265 ************************************ 00:07:58.265 15:56:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:58.265 15:56:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.265 15:56:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.265 15:56:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.265 ************************************ 00:07:58.266 START TEST default_locks_via_rpc 00:07:58.266 ************************************ 00:07:58.266 15:56:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:58.266 15:56:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=161061 00:07:58.266 15:56:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 161061 00:07:58.266 15:56:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
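Just before default_locks ends above, the script runs NOT waitforlisten 160818 against the pid it has already killed: waitforlisten must fail ("No such process", return 1) and the NOT wrapper turns that failure into a pass. A loose paraphrase of the wrapper, as far as the trace shows it:

NOT() {                                                       # succeed only when the wrapped command fails
  local es=0
  "$@" || es=$?
  # the trace also tests (( es > 128 )) to special-case death by signal; that branch is not taken here
  (( !es == 0 ))                                              # true (return 0) whenever es is non-zero
}

NOT waitforlisten 160818                                      # pid is gone, so the inner call fails and the test passes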
00:07:58.266 15:56:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 161061 ']' 00:07:58.266 15:56:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.266 15:56:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.266 15:56:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.266 15:56:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.266 15:56:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.266 [2024-07-25 15:56:16.241624] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:07:58.266 [2024-07-25 15:56:16.241693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161061 ] 00:07:58.524 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.524 [2024-07-25 15:56:16.313148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.524 [2024-07-25 15:56:16.382798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 161061 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 161061 00:07:59.092 15:56:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:59.661 15:56:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 161061 
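default_locks_via_rpc, traced above, toggles the same core lock at runtime instead of at startup: framework_disable_cpumask_locks must drop the spdk_cpu_lock file locks and framework_enable_cpumask_locks must take them back before the target is killed. A condensed sketch (rpc_cmd in the trace is the suite's wrapper for scripts/rpc.py; pid is 161061 in this run, and the lslocks check stands in for the suite's no_locks/locks_exist helpers):

RPC="./scripts/rpc.py"                                        # talks to the default socket /var/tmp/spdk.sock

$RPC framework_disable_cpumask_locks                          # release the per-core lock files at runtime
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held"

$RPC framework_enable_cpumask_locks                           # re-acquire them
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "lock re-acquired as expected"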
00:07:59.661 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 161061 ']' 00:07:59.661 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 161061 00:07:59.661 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:59.661 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.661 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 161061 00:07:59.661 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.661 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.661 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 161061' 00:07:59.661 killing process with pid 161061 00:07:59.661 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 161061 00:07:59.661 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 161061 00:08:00.230 00:08:00.230 real 0m1.723s 00:08:00.230 user 0m1.817s 00:08:00.230 sys 0m0.548s 00:08:00.231 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.231 15:56:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.231 ************************************ 00:08:00.231 END TEST default_locks_via_rpc 00:08:00.231 ************************************ 00:08:00.231 15:56:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:00.231 15:56:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.231 15:56:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.231 15:56:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.231 ************************************ 00:08:00.231 START TEST non_locking_app_on_locked_coremask 00:08:00.231 ************************************ 00:08:00.231 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:00.231 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=161309 00:08:00.231 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 161309 /var/tmp/spdk.sock 00:08:00.231 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:00.231 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 161309 ']' 00:08:00.231 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.231 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.231 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:00.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.231 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.231 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.231 [2024-07-25 15:56:18.031855] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:08:00.231 [2024-07-25 15:56:18.031927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161309 ] 00:08:00.231 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.231 [2024-07-25 15:56:18.102164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.231 [2024-07-25 15:56:18.182738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.168 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.168 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:01.168 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=161523 00:08:01.168 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 161523 /var/tmp/spdk2.sock 00:08:01.168 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:01.168 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 161523 ']' 00:08:01.168 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:01.168 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.168 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:01.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:01.168 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.168 15:56:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:01.168 [2024-07-25 15:56:18.869855] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:08:01.168 [2024-07-25 15:56:18.869930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161523 ] 00:08:01.168 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.168 [2024-07-25 15:56:18.944904] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
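non_locking_app_on_locked_coremask, set up above, starts a second target on the same core mask; because that second instance passes --disable-cpumask-locks it must come up even though the first one already holds the core 0 lock (its log prints "CPU core locks deactivated"). The two launches, reduced to the command lines in the trace:

./build/bin/spdk_tgt -m 0x1 &                                 # first instance: acquires the core 0 lock
pid1=$!
waitforlisten "$pid1"

./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second instance: same mask, locking off
pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock                     # must succeed despite the held lock

killprocess "$pid1"; killprocess "$pid2"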
00:08:01.168 [2024-07-25 15:56:18.944927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.168 [2024-07-25 15:56:19.094795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.737 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.737 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:01.737 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 161309 00:08:01.737 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 161309 00:08:01.737 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:01.996 lslocks: write error 00:08:01.996 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 161309 00:08:01.996 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 161309 ']' 00:08:01.996 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 161309 00:08:01.996 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:01.996 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.996 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 161309 00:08:02.254 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.254 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.254 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 161309' 00:08:02.254 killing process with pid 161309 00:08:02.254 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 161309 00:08:02.254 15:56:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 161309 00:08:02.822 15:56:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 161523 00:08:02.822 15:56:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 161523 ']' 00:08:02.822 15:56:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 161523 00:08:02.822 15:56:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:02.822 15:56:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.822 15:56:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 161523 00:08:02.822 15:56:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.822 15:56:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.822 15:56:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 161523' 00:08:02.822 killing 
process with pid 161523 00:08:02.822 15:56:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 161523 00:08:02.822 15:56:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 161523 00:08:03.082 00:08:03.082 real 0m2.919s 00:08:03.082 user 0m3.088s 00:08:03.082 sys 0m0.822s 00:08:03.082 15:56:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.082 15:56:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.082 ************************************ 00:08:03.082 END TEST non_locking_app_on_locked_coremask 00:08:03.082 ************************************ 00:08:03.082 15:56:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:03.082 15:56:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.082 15:56:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.082 15:56:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:03.082 ************************************ 00:08:03.082 START TEST locking_app_on_unlocked_coremask 00:08:03.082 ************************************ 00:08:03.082 15:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:03.082 15:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=161783 00:08:03.082 15:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 161783 /var/tmp/spdk.sock 00:08:03.082 15:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:03.082 15:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 161783 ']' 00:08:03.082 15:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.082 15:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.082 15:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.082 15:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.082 15:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.082 [2024-07-25 15:56:21.014689] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:08:03.082 [2024-07-25 15:56:21.014754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161783 ] 00:08:03.082 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.341 [2024-07-25 15:56:21.081966] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:03.341 [2024-07-25 15:56:21.081990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.341 [2024-07-25 15:56:21.162542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.908 15:56:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.908 15:56:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:03.908 15:56:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:03.908 15:56:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=161993 00:08:03.908 15:56:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 161993 /var/tmp/spdk2.sock 00:08:03.908 15:56:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 161993 ']' 00:08:03.908 15:56:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:03.908 15:56:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.908 15:56:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:03.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:03.908 15:56:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.908 15:56:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.908 [2024-07-25 15:56:21.848289] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:08:03.908 [2024-07-25 15:56:21.848331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161993 ] 00:08:03.908 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.166 [2024-07-25 15:56:21.919246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.166 [2024-07-25 15:56:22.069215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.732 15:56:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.732 15:56:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:04.733 15:56:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 161993 00:08:04.733 15:56:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 161993 00:08:04.733 15:56:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:05.299 lslocks: write error 00:08:05.299 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 161783 00:08:05.299 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 161783 ']' 00:08:05.299 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 161783 00:08:05.299 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:05.299 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.299 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 161783 00:08:05.299 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:05.299 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:05.299 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 161783' 00:08:05.299 killing process with pid 161783 00:08:05.299 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 161783 00:08:05.299 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 161783 00:08:05.874 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 161993 00:08:05.874 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 161993 ']' 00:08:05.874 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 161993 00:08:05.874 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:06.132 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.132 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 161993 00:08:06.132 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:08:06.132 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:06.132 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 161993' 00:08:06.132 killing process with pid 161993 00:08:06.132 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 161993 00:08:06.132 15:56:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 161993 00:08:06.391 00:08:06.391 real 0m3.205s 00:08:06.391 user 0m3.411s 00:08:06.391 sys 0m0.905s 00:08:06.391 15:56:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.391 15:56:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:06.391 ************************************ 00:08:06.391 END TEST locking_app_on_unlocked_coremask 00:08:06.391 ************************************ 00:08:06.391 15:56:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:06.391 15:56:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:06.391 15:56:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.391 15:56:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:06.391 ************************************ 00:08:06.391 START TEST locking_app_on_locked_coremask 00:08:06.391 ************************************ 00:08:06.391 15:56:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:06.391 15:56:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=162449 00:08:06.391 15:56:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 162449 /var/tmp/spdk.sock 00:08:06.391 15:56:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:06.391 15:56:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 162449 ']' 00:08:06.391 15:56:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.391 15:56:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:06.391 15:56:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.391 15:56:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:06.391 15:56:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:06.391 [2024-07-25 15:56:24.285211] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
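In the locking_app_on_unlocked_coremask run above the roles are inverted: the first target starts with --disable-cpumask-locks, so the second target (started without that flag) is the one that claims the core-0 lock, and the locks_exist helper therefore checks that second pid. A rough sketch of the verification step, using the pid value from this log:

lslocks -p 161993 | grep -q spdk_cpu_lock && echo "pid 161993 holds an spdk_cpu_lock"   # mirrors locks_exist 161993 above
ls /var/tmp/spdk_cpu_lock_*                                                             # one lock file per claimed core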
00:08:06.391 [2024-07-25 15:56:24.285280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162449 ] 00:08:06.391 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.391 [2024-07-25 15:56:24.353413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.650 [2024-07-25 15:56:24.424463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=162474 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 162474 /var/tmp/spdk2.sock 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 162474 /var/tmp/spdk2.sock 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 162474 /var/tmp/spdk2.sock 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 162474 ']' 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:07.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.218 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.218 [2024-07-25 15:56:25.126831] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:08:07.218 [2024-07-25 15:56:25.126901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162474 ] 00:08:07.218 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.219 [2024-07-25 15:56:25.203326] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 162449 has claimed it. 00:08:07.219 [2024-07-25 15:56:25.203362] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:08.157 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (162474) - No such process 00:08:08.157 ERROR: process (pid: 162474) is no longer running 00:08:08.157 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.157 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:08.157 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:08.157 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.157 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:08.157 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.157 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 162449 00:08:08.157 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 162449 00:08:08.157 15:56:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:08.157 lslocks: write error 00:08:08.157 15:56:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 162449 00:08:08.157 15:56:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 162449 ']' 00:08:08.157 15:56:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 162449 00:08:08.157 15:56:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:08.157 15:56:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.157 15:56:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 162449 00:08:08.157 15:56:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:08.157 15:56:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:08.157 15:56:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 162449' 00:08:08.157 killing process with pid 162449 00:08:08.157 15:56:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 162449 00:08:08.157 15:56:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 162449 00:08:08.726 00:08:08.726 real 0m2.147s 00:08:08.726 user 0m2.365s 00:08:08.726 sys 0m0.559s 00:08:08.726 15:56:26 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.726 15:56:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.726 ************************************ 00:08:08.726 END TEST locking_app_on_locked_coremask 00:08:08.726 ************************************ 00:08:08.726 15:56:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:08.726 15:56:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.726 15:56:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.726 15:56:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:08.726 ************************************ 00:08:08.726 START TEST locking_overlapped_coremask 00:08:08.726 ************************************ 00:08:08.726 15:56:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:08.726 15:56:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=162725 00:08:08.726 15:56:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 162725 /var/tmp/spdk.sock 00:08:08.726 15:56:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:08.726 15:56:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 162725 ']' 00:08:08.726 15:56:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.726 15:56:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.726 15:56:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.726 15:56:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.726 15:56:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.726 [2024-07-25 15:56:26.503268] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
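locking_app_on_locked_coremask, which finishes just above, covers the failure path: with the core-0 lock already held, a second spdk_tgt on the same -m 0x1 mask (and without --disable-cpumask-locks) logs the claim_cpu_cores error and exits, which the harness asserts through its NOT wrapper. A hedged stand-alone sketch of that collision, again using sleep instead of waitforlisten:

/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &
sleep 2
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
echo "second instance exited with $?"   # expected: claim_cpu_cores *ERROR* ... has claimed it, non-zero exit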
00:08:08.726 [2024-07-25 15:56:26.503337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162725 ] 00:08:08.726 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.726 [2024-07-25 15:56:26.573620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:08.726 [2024-07-25 15:56:26.656479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.726 [2024-07-25 15:56:26.656504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.726 [2024-07-25 15:56:26.656505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=162925 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 162925 /var/tmp/spdk2.sock 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 162925 /var/tmp/spdk2.sock 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 162925 /var/tmp/spdk2.sock 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 162925 ']' 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:09.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.664 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.664 [2024-07-25 15:56:27.351962] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:08:09.664 [2024-07-25 15:56:27.352036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162925 ] 00:08:09.664 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.664 [2024-07-25 15:56:27.431913] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 162725 has claimed it. 00:08:09.664 [2024-07-25 15:56:27.431942] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:10.232 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (162925) - No such process 00:08:10.232 ERROR: process (pid: 162925) is no longer running 00:08:10.232 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.232 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:10.232 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:10.232 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:10.232 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:10.232 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:10.232 15:56:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:10.232 15:56:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:10.232 15:56:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:10.232 15:56:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:10.233 15:56:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 162725 00:08:10.233 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 162725 ']' 00:08:10.233 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 162725 00:08:10.233 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:10.233 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.233 15:56:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 162725 00:08:10.233 15:56:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.233 15:56:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.233 15:56:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 162725' 00:08:10.233 killing process with pid 162725 00:08:10.233 15:56:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 
-- # kill 162725 00:08:10.233 15:56:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 162725 00:08:10.492 00:08:10.492 real 0m1.858s 00:08:10.492 user 0m5.258s 00:08:10.492 sys 0m0.384s 00:08:10.492 15:56:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.492 15:56:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.492 ************************************ 00:08:10.492 END TEST locking_overlapped_coremask 00:08:10.492 ************************************ 00:08:10.492 15:56:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:10.492 15:56:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.492 15:56:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.492 15:56:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:10.492 ************************************ 00:08:10.492 START TEST locking_overlapped_coremask_via_rpc 00:08:10.492 ************************************ 00:08:10.492 15:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:10.492 15:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=163165 00:08:10.492 15:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 163165 /var/tmp/spdk.sock 00:08:10.492 15:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:10.492 15:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 163165 ']' 00:08:10.492 15:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.492 15:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.492 15:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.492 15:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.492 15:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.492 [2024-07-25 15:56:28.426728] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:08:10.492 [2024-07-25 15:56:28.426801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163165 ] 00:08:10.492 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.751 [2024-07-25 15:56:28.493623] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
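The locking_overlapped_coremask outcome above follows directly from the two core masks: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so the bitwise overlap is core 2, which is exactly the core named in the claim_cpu_cores error; check_remaining_locks then expects spdk_cpu_lock_000 through _002 to survive for the -m 0x7 target. The overlap can be confirmed with shell arithmetic:

printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4 -> bit 2 set -> core 2 is contested
ls /var/tmp/spdk_cpu_lock_00{0,1,2}                # lock files still held by the -m 0x7 instance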
00:08:10.751 [2024-07-25 15:56:28.493647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.751 [2024-07-25 15:56:28.575165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.751 [2024-07-25 15:56:28.575268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.751 [2024-07-25 15:56:28.575269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.319 15:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.319 15:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:11.319 15:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=163251 00:08:11.319 15:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 163251 /var/tmp/spdk2.sock 00:08:11.319 15:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:11.319 15:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 163251 ']' 00:08:11.319 15:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:11.319 15:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.319 15:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:11.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:11.319 15:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.319 15:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.319 [2024-07-25 15:56:29.286881] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:08:11.319 [2024-07-25 15:56:29.286956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163251 ] 00:08:11.578 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.578 [2024-07-25 15:56:29.365639] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:11.578 [2024-07-25 15:56:29.365669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:11.578 [2024-07-25 15:56:29.526460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.578 [2024-07-25 15:56:29.529808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.578 [2024-07-25 15:56:29.529811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:12.147 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.147 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:12.147 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:12.147 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.147 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.147 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.147 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:12.147 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:12.147 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:12.147 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:12.147 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.147 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.406 [2024-07-25 15:56:30.142826] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 163165 has claimed it. 
00:08:12.406 request: 00:08:12.406 { 00:08:12.406 "method": "framework_enable_cpumask_locks", 00:08:12.406 "req_id": 1 00:08:12.406 } 00:08:12.406 Got JSON-RPC error response 00:08:12.406 response: 00:08:12.406 { 00:08:12.406 "code": -32603, 00:08:12.406 "message": "Failed to claim CPU core: 2" 00:08:12.406 } 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 163165 /var/tmp/spdk.sock 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 163165 ']' 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 163251 /var/tmp/spdk2.sock 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 163251 ']' 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:12.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
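The JSON-RPC exchange above is the heart of locking_overlapped_coremask_via_rpc: both targets boot with --disable-cpumask-locks, the first (on /var/tmp/spdk.sock, mask 0x7) then claims its cores via framework_enable_cpumask_locks, and the same call against the second target (on /var/tmp/spdk2.sock, mask 0x1c) fails with code -32603 because core 2 is already locked. A sketch of the two calls with rpc.py, assuming the script path shown in this log:

/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py framework_enable_cpumask_locks
# -> succeeds; cores 0-2 now have spdk_cpu_lock files
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# -> JSON-RPC error -32603: "Failed to claim CPU core: 2"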
00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.406 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.665 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.665 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:12.665 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:12.665 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:12.665 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:12.665 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:12.665 00:08:12.665 real 0m2.105s 00:08:12.665 user 0m0.877s 00:08:12.665 sys 0m0.166s 00:08:12.665 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.665 15:56:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.665 ************************************ 00:08:12.665 END TEST locking_overlapped_coremask_via_rpc 00:08:12.665 ************************************ 00:08:12.665 15:56:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:12.665 15:56:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 163165 ]] 00:08:12.665 15:56:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 163165 00:08:12.665 15:56:30 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 163165 ']' 00:08:12.666 15:56:30 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 163165 00:08:12.666 15:56:30 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:12.666 15:56:30 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.666 15:56:30 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 163165 00:08:12.666 15:56:30 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.666 15:56:30 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.666 15:56:30 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 163165' 00:08:12.666 killing process with pid 163165 00:08:12.666 15:56:30 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 163165 00:08:12.666 15:56:30 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 163165 00:08:12.924 15:56:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 163251 ]] 00:08:12.924 15:56:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 163251 00:08:12.924 15:56:30 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 163251 ']' 00:08:12.924 15:56:30 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 163251 00:08:12.924 15:56:30 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:12.924 15:56:30 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:08:12.924 15:56:30 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 163251 00:08:13.183 15:56:30 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:13.183 15:56:30 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:13.183 15:56:30 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 163251' 00:08:13.183 killing process with pid 163251 00:08:13.183 15:56:30 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 163251 00:08:13.183 15:56:30 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 163251 00:08:13.442 15:56:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:13.442 15:56:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:13.442 15:56:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 163165 ]] 00:08:13.442 15:56:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 163165 00:08:13.442 15:56:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 163165 ']' 00:08:13.442 15:56:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 163165 00:08:13.442 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (163165) - No such process 00:08:13.442 15:56:31 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 163165 is not found' 00:08:13.442 Process with pid 163165 is not found 00:08:13.442 15:56:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 163251 ]] 00:08:13.442 15:56:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 163251 00:08:13.442 15:56:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 163251 ']' 00:08:13.442 15:56:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 163251 00:08:13.442 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (163251) - No such process 00:08:13.442 15:56:31 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 163251 is not found' 00:08:13.442 Process with pid 163251 is not found 00:08:13.442 15:56:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:13.442 00:08:13.442 real 0m16.523s 00:08:13.442 user 0m28.638s 00:08:13.442 sys 0m4.687s 00:08:13.442 15:56:31 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.442 15:56:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:13.442 ************************************ 00:08:13.442 END TEST cpu_locks 00:08:13.442 ************************************ 00:08:13.442 00:08:13.442 real 0m41.344s 00:08:13.442 user 1m18.560s 00:08:13.442 sys 0m8.258s 00:08:13.442 15:56:31 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.442 15:56:31 event -- common/autotest_common.sh@10 -- # set +x 00:08:13.442 ************************************ 00:08:13.442 END TEST event 00:08:13.442 ************************************ 00:08:13.442 15:56:31 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:08:13.442 15:56:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:13.442 15:56:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.442 15:56:31 -- common/autotest_common.sh@10 -- # set +x 00:08:13.442 ************************************ 00:08:13.442 START TEST thread 00:08:13.442 ************************************ 00:08:13.442 15:56:31 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:08:13.442 * Looking for test storage... 00:08:13.702 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:08:13.702 15:56:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:13.702 15:56:31 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:13.702 15:56:31 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.702 15:56:31 thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.702 ************************************ 00:08:13.702 START TEST thread_poller_perf 00:08:13.702 ************************************ 00:08:13.702 15:56:31 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:13.702 [2024-07-25 15:56:31.486487] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:08:13.702 [2024-07-25 15:56:31.486570] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163715 ] 00:08:13.702 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.702 [2024-07-25 15:56:31.557090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.702 [2024-07-25 15:56:31.630847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.702 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:15.076 ====================================== 00:08:15.076 busy:2103645956 (cyc) 00:08:15.076 total_run_count: 843000 00:08:15.076 tsc_hz: 2100000000 (cyc) 00:08:15.076 ====================================== 00:08:15.076 poller_cost: 2495 (cyc), 1188 (nsec) 00:08:15.076 00:08:15.076 real 0m1.227s 00:08:15.076 user 0m1.136s 00:08:15.076 sys 0m0.087s 00:08:15.076 15:56:32 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.076 15:56:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:15.076 ************************************ 00:08:15.076 END TEST thread_poller_perf 00:08:15.076 ************************************ 00:08:15.076 15:56:32 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:15.076 15:56:32 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:15.076 15:56:32 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.076 15:56:32 thread -- common/autotest_common.sh@10 -- # set +x 00:08:15.076 ************************************ 00:08:15.076 START TEST thread_poller_perf 00:08:15.076 ************************************ 00:08:15.076 15:56:32 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:15.076 [2024-07-25 15:56:32.775402] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
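The poller_cost figures printed above are consistent with dividing the busy cycle count by the run count and converting to nanoseconds with the reported TSC frequency; for the 1 µs-period run:

awk 'BEGIN { c = 2103645956 / 843000;                                    # ~2495 cycles per poller invocation
             printf "%.0f cyc, %.0f nsec\n", c, c * 1e9 / 2100000000 }'  # ~1188 ns at the 2.1 GHz tsc_hz above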
00:08:15.076 [2024-07-25 15:56:32.775464] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163950 ] 00:08:15.076 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.076 [2024-07-25 15:56:32.845659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.076 [2024-07-25 15:56:32.920367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.076 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:16.009 ====================================== 00:08:16.009 busy:2100902512 (cyc) 00:08:16.009 total_run_count: 13669000 00:08:16.009 tsc_hz: 2100000000 (cyc) 00:08:16.009 ====================================== 00:08:16.009 poller_cost: 153 (cyc), 72 (nsec) 00:08:16.009 00:08:16.009 real 0m1.220s 00:08:16.009 user 0m1.137s 00:08:16.009 sys 0m0.079s 00:08:16.009 15:56:33 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.009 15:56:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:16.010 ************************************ 00:08:16.010 END TEST thread_poller_perf 00:08:16.010 ************************************ 00:08:16.268 15:56:34 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:08:16.268 15:56:34 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:08:16.268 15:56:34 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:16.268 15:56:34 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.268 15:56:34 thread -- common/autotest_common.sh@10 -- # set +x 00:08:16.268 ************************************ 00:08:16.268 START TEST thread_spdk_lock 00:08:16.268 ************************************ 00:08:16.268 15:56:34 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:08:16.268 [2024-07-25 15:56:34.067838] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:08:16.268 [2024-07-25 15:56:34.067933] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164186 ] 00:08:16.268 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.268 [2024-07-25 15:56:34.142215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.268 [2024-07-25 15:56:34.216004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.268 [2024-07-25 15:56:34.216005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.835 [2024-07-25 15:56:34.704026] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:16.835 [2024-07-25 15:56:34.704062] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:16.835 [2024-07-25 15:56:34.704070] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x14d5bc0 00:08:16.835 [2024-07-25 15:56:34.704894] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:16.835 [2024-07-25 15:56:34.704998] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:16.835 [2024-07-25 15:56:34.705014] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:16.835 Starting test contend 00:08:16.835 Worker Delay Wait us Hold us Total us 00:08:16.835 0 3 177184 185132 362316 00:08:16.835 1 5 93754 285969 379723 00:08:16.835 PASS test contend 00:08:16.835 Starting test hold_by_poller 00:08:16.835 PASS test hold_by_poller 00:08:16.835 Starting test hold_by_message 00:08:16.835 PASS test hold_by_message 00:08:16.835 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:08:16.835 100014 assertions passed 00:08:16.835 0 assertions failed 00:08:16.835 00:08:16.835 real 0m0.717s 00:08:16.835 user 0m1.109s 00:08:16.835 sys 0m0.093s 00:08:16.835 15:56:34 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.835 15:56:34 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:08:16.835 ************************************ 00:08:16.835 END TEST thread_spdk_lock 00:08:16.835 ************************************ 00:08:16.835 00:08:16.835 real 0m3.451s 00:08:16.835 user 0m3.496s 00:08:16.835 sys 0m0.453s 00:08:16.835 15:56:34 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.835 15:56:34 thread -- common/autotest_common.sh@10 -- # set +x 00:08:16.835 ************************************ 00:08:16.835 END TEST thread 00:08:16.835 ************************************ 00:08:17.093 15:56:34 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:08:17.093 15:56:34 -- spdk/autotest.sh@189 -- # run_test app_cmdline 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:08:17.093 15:56:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:17.093 15:56:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.093 15:56:34 -- common/autotest_common.sh@10 -- # set +x 00:08:17.093 ************************************ 00:08:17.093 START TEST app_cmdline 00:08:17.093 ************************************ 00:08:17.093 15:56:34 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:08:17.093 * Looking for test storage... 00:08:17.093 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:17.093 15:56:34 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:17.093 15:56:34 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=164456 00:08:17.093 15:56:34 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:17.093 15:56:34 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 164456 00:08:17.093 15:56:34 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 164456 ']' 00:08:17.093 15:56:34 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.093 15:56:34 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.093 15:56:34 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.093 15:56:34 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.093 15:56:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:17.093 [2024-07-25 15:56:34.981570] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:08:17.093 [2024-07-25 15:56:34.981652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164456 ] 00:08:17.093 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.093 [2024-07-25 15:56:35.051312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.351 [2024-07-25 15:56:35.127710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.918 15:56:35 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.918 15:56:35 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:17.918 15:56:35 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:18.177 { 00:08:18.177 "version": "SPDK v24.09-pre git sha1 5efb3b7d9", 00:08:18.177 "fields": { 00:08:18.177 "major": 24, 00:08:18.177 "minor": 9, 00:08:18.177 "patch": 0, 00:08:18.177 "suffix": "-pre", 00:08:18.177 "commit": "5efb3b7d9" 00:08:18.177 } 00:08:18.177 } 00:08:18.177 15:56:35 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:18.177 15:56:35 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:18.177 15:56:35 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:18.177 15:56:35 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:18.177 15:56:35 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:18.177 15:56:35 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:18.177 15:56:35 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:18.177 15:56:35 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.177 15:56:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:18.177 15:56:36 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.177 15:56:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:18.177 15:56:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:18.177 15:56:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.177 15:56:36 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:18.177 15:56:36 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.177 15:56:36 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:18.177 15:56:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.177 15:56:36 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:18.177 15:56:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.177 15:56:36 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:18.177 15:56:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.177 15:56:36 app_cmdline -- common/autotest_common.sh@644 -- # 
arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:18.177 15:56:36 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:08:18.177 15:56:36 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.436 request: 00:08:18.436 { 00:08:18.436 "method": "env_dpdk_get_mem_stats", 00:08:18.436 "req_id": 1 00:08:18.436 } 00:08:18.436 Got JSON-RPC error response 00:08:18.436 response: 00:08:18.436 { 00:08:18.436 "code": -32601, 00:08:18.436 "message": "Method not found" 00:08:18.436 } 00:08:18.436 15:56:36 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:18.436 15:56:36 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.436 15:56:36 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:18.436 15:56:36 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.436 15:56:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 164456 00:08:18.436 15:56:36 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 164456 ']' 00:08:18.436 15:56:36 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 164456 00:08:18.436 15:56:36 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:18.436 15:56:36 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.436 15:56:36 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 164456 00:08:18.436 15:56:36 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:18.436 15:56:36 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:18.436 15:56:36 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 164456' 00:08:18.436 killing process with pid 164456 00:08:18.436 15:56:36 app_cmdline -- common/autotest_common.sh@969 -- # kill 164456 00:08:18.436 15:56:36 app_cmdline -- common/autotest_common.sh@974 -- # wait 164456 00:08:18.696 00:08:18.696 real 0m1.667s 00:08:18.696 user 0m2.004s 00:08:18.696 sys 0m0.422s 00:08:18.696 15:56:36 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.696 15:56:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:18.696 ************************************ 00:08:18.696 END TEST app_cmdline 00:08:18.696 ************************************ 00:08:18.696 15:56:36 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:08:18.696 15:56:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.696 15:56:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.696 15:56:36 -- common/autotest_common.sh@10 -- # set +x 00:08:18.696 ************************************ 00:08:18.696 START TEST version 00:08:18.696 ************************************ 00:08:18.696 15:56:36 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:08:18.696 * Looking for test storage... 
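The version test that starts next reads each field out of include/spdk/version.h with the grep/cut/tr pipeline shown in its output and checks the result against the bundled Python package. A condensed sketch of that parsing (field names and the rc0 suffix handling follow this run's output):

  # Pull one SPDK_VERSION_* field out of include/spdk/version.h.
  get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
      cut -f2 | tr -d '"'
  }

  major=$(get_header_version MAJOR)    # 24 in this run
  minor=$(get_header_version MINOR)    # 9
  patch=$(get_header_version PATCH)    # 0
  suffix=$(get_header_version SUFFIX)  # -pre

  version=${major}.${minor}
  (( patch != 0 )) && version=${version}.${patch}
  [[ -n $suffix ]] && version=${version}rc0    # pre-release builds report 24.9rc0

  # Must match what the Python bindings report.
  [[ "$(python3 -c 'import spdk; print(spdk.__version__)')" == "$version" ]]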
00:08:18.955 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:18.955 15:56:36 version -- app/version.sh@17 -- # get_header_version major 00:08:18.955 15:56:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:18.955 15:56:36 version -- app/version.sh@14 -- # cut -f2 00:08:18.955 15:56:36 version -- app/version.sh@14 -- # tr -d '"' 00:08:18.955 15:56:36 version -- app/version.sh@17 -- # major=24 00:08:18.955 15:56:36 version -- app/version.sh@18 -- # get_header_version minor 00:08:18.956 15:56:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:18.956 15:56:36 version -- app/version.sh@14 -- # cut -f2 00:08:18.956 15:56:36 version -- app/version.sh@14 -- # tr -d '"' 00:08:18.956 15:56:36 version -- app/version.sh@18 -- # minor=9 00:08:18.956 15:56:36 version -- app/version.sh@19 -- # get_header_version patch 00:08:18.956 15:56:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:18.956 15:56:36 version -- app/version.sh@14 -- # cut -f2 00:08:18.956 15:56:36 version -- app/version.sh@14 -- # tr -d '"' 00:08:18.956 15:56:36 version -- app/version.sh@19 -- # patch=0 00:08:18.956 15:56:36 version -- app/version.sh@20 -- # get_header_version suffix 00:08:18.956 15:56:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:18.956 15:56:36 version -- app/version.sh@14 -- # cut -f2 00:08:18.956 15:56:36 version -- app/version.sh@14 -- # tr -d '"' 00:08:18.956 15:56:36 version -- app/version.sh@20 -- # suffix=-pre 00:08:18.956 15:56:36 version -- app/version.sh@22 -- # version=24.9 00:08:18.956 15:56:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:18.956 15:56:36 version -- app/version.sh@28 -- # version=24.9rc0 00:08:18.956 15:56:36 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:18.956 15:56:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:18.956 15:56:36 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:18.956 15:56:36 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:18.956 00:08:18.956 real 0m0.154s 00:08:18.956 user 0m0.082s 00:08:18.956 sys 0m0.108s 00:08:18.956 15:56:36 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.956 15:56:36 version -- common/autotest_common.sh@10 -- # set +x 00:08:18.956 ************************************ 00:08:18.956 END TEST version 00:08:18.956 ************************************ 00:08:18.956 15:56:36 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@202 -- # uname -s 00:08:18.956 15:56:36 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:08:18.956 15:56:36 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:18.956 15:56:36 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:18.956 15:56:36 -- spdk/autotest.sh@215 -- 
# '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@264 -- # timing_exit lib 00:08:18.956 15:56:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:18.956 15:56:36 -- common/autotest_common.sh@10 -- # set +x 00:08:18.956 15:56:36 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:08:18.956 15:56:36 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:08:18.956 15:56:36 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:08:18.956 15:56:36 -- spdk/autotest.sh@375 -- # [[ 1 -eq 1 ]] 00:08:18.956 15:56:36 -- spdk/autotest.sh@376 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:18.956 15:56:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.956 15:56:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.956 15:56:36 -- common/autotest_common.sh@10 -- # set +x 00:08:18.956 ************************************ 00:08:18.956 START TEST llvm_fuzz 00:08:18.956 ************************************ 00:08:18.956 15:56:36 llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:19.217 * Looking for test storage... 
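llvm.sh, which takes over at this point, has no explicit fuzzer list, so it globs test/fuzz/llvm/ and keeps only the basenames; the helper scripts in that directory are skipped and run_test is dispatched for the real targets (nvmf first, as below). A condensed sketch of that enumeration:

  # get_fuzzer_targets with no explicit list: glob the llvm fuzz directory.
  fuzzers=("$rootdir/test/fuzz/llvm/"*)   # full paths
  fuzzers=("${fuzzers[@]##*/}")           # basenames: common.sh llvm-gcov.sh nvmf vfio

  for fuzzer in "${fuzzers[@]}"; do
    case "$fuzzer" in
      nvmf|vfio) run_test "${fuzzer}_llvm_fuzz" "$rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;
      *) ;;                               # common.sh / llvm-gcov.sh are helpers, not fuzzers
    esac
  done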
00:08:19.217 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:08:19.217 15:56:36 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:08:19.217 15:56:36 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:08:19.217 15:56:36 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:08:19.217 15:56:36 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:08:19.217 15:56:36 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:08:19.217 15:56:36 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:08:19.217 15:56:36 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:08:19.217 15:56:36 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:08:19.217 15:56:36 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:08:19.217 15:56:36 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:08:19.217 15:56:36 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:08:19.217 15:56:36 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:19.217 15:56:36 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:19.217 15:56:36 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:19.217 15:56:36 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:19.217 15:56:36 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:19.217 15:56:36 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:19.217 15:56:36 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:19.217 15:56:36 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.217 15:56:36 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.217 15:56:36 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:19.217 ************************************ 00:08:19.217 START TEST nvmf_llvm_fuzz 00:08:19.217 ************************************ 00:08:19.217 15:56:36 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:19.217 * Looking for test storage... 
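Every suite in this log, including the nvmf_llvm_fuzz run starting here, goes through the run_test wrapper from autotest_common.sh, which produces the START/END banners and the real/user/sys timings seen around each block. A simplified reconstruction from that visible output (the real helper also validates its arguments and manages xtrace):

  run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"        # the suite runs here; bash prints real/user/sys when it finishes
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
  }

  # e.g. run_test nvmf_llvm_fuzz "$rootdir/test/fuzz/llvm/nvmf/run.sh"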
00:08:19.217 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:19.217 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:19.218 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:19.218 #define SPDK_CONFIG_H 00:08:19.218 #define SPDK_CONFIG_APPS 1 00:08:19.218 #define SPDK_CONFIG_ARCH native 00:08:19.219 #undef SPDK_CONFIG_ASAN 00:08:19.219 #undef SPDK_CONFIG_AVAHI 00:08:19.219 #undef SPDK_CONFIG_CET 00:08:19.219 #define SPDK_CONFIG_COVERAGE 1 00:08:19.219 #define SPDK_CONFIG_CROSS_PREFIX 00:08:19.219 #undef SPDK_CONFIG_CRYPTO 00:08:19.219 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:19.219 #undef SPDK_CONFIG_CUSTOMOCF 00:08:19.219 #undef SPDK_CONFIG_DAOS 00:08:19.219 #define SPDK_CONFIG_DAOS_DIR 00:08:19.219 #define SPDK_CONFIG_DEBUG 1 00:08:19.219 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:19.219 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:19.219 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:19.219 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:19.219 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:19.219 #undef SPDK_CONFIG_DPDK_UADK 00:08:19.219 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:19.219 #define SPDK_CONFIG_EXAMPLES 1 00:08:19.219 #undef SPDK_CONFIG_FC 00:08:19.219 #define SPDK_CONFIG_FC_PATH 00:08:19.219 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:19.219 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:19.219 #undef SPDK_CONFIG_FUSE 00:08:19.219 #define SPDK_CONFIG_FUZZER 1 00:08:19.219 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:19.219 #undef SPDK_CONFIG_GOLANG 00:08:19.219 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:19.219 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:19.219 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:19.219 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:19.219 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:19.219 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:19.219 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:19.219 #define SPDK_CONFIG_IDXD 1 00:08:19.219 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:19.219 #undef SPDK_CONFIG_IPSEC_MB 00:08:19.219 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:19.219 #define SPDK_CONFIG_ISAL 1 00:08:19.219 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:08:19.219 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:19.219 #define SPDK_CONFIG_LIBDIR 00:08:19.219 #undef SPDK_CONFIG_LTO 00:08:19.219 #define SPDK_CONFIG_MAX_LCORES 128 00:08:19.219 #define SPDK_CONFIG_NVME_CUSE 1 00:08:19.219 #undef SPDK_CONFIG_OCF 00:08:19.219 #define SPDK_CONFIG_OCF_PATH 00:08:19.219 #define SPDK_CONFIG_OPENSSL_PATH 00:08:19.219 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:19.219 #define SPDK_CONFIG_PGO_DIR 00:08:19.219 #undef SPDK_CONFIG_PGO_USE 00:08:19.219 #define SPDK_CONFIG_PREFIX /usr/local 00:08:19.219 #undef SPDK_CONFIG_RAID5F 00:08:19.219 #undef SPDK_CONFIG_RBD 00:08:19.219 #define SPDK_CONFIG_RDMA 1 00:08:19.219 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:19.219 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:19.219 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:19.219 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:19.219 #undef SPDK_CONFIG_SHARED 00:08:19.219 #undef SPDK_CONFIG_SMA 00:08:19.219 #define SPDK_CONFIG_TESTS 1 00:08:19.219 #undef SPDK_CONFIG_TSAN 00:08:19.219 #define SPDK_CONFIG_UBLK 1 00:08:19.219 #define SPDK_CONFIG_UBSAN 1 00:08:19.219 #undef SPDK_CONFIG_UNIT_TESTS 00:08:19.219 #undef SPDK_CONFIG_URING 00:08:19.219 #define SPDK_CONFIG_URING_PATH 00:08:19.219 #undef SPDK_CONFIG_URING_ZNS 00:08:19.219 #undef SPDK_CONFIG_USDT 00:08:19.219 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:19.219 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:19.219 #define SPDK_CONFIG_VFIO_USER 1 00:08:19.219 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:19.219 #define SPDK_CONFIG_VHOST 1 00:08:19.219 #define SPDK_CONFIG_VIRTIO 1 00:08:19.219 #undef SPDK_CONFIG_VTUNE 00:08:19.219 #define SPDK_CONFIG_VTUNE_DIR 00:08:19.219 #define SPDK_CONFIG_WERROR 1 00:08:19.219 #define SPDK_CONFIG_WPDK_DIR 00:08:19.219 #undef SPDK_CONFIG_XNVME 00:08:19.219 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:19.219 
15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:19.219 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:19.220 15:56:37 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.220 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # cat 00:08:19.221 15:56:37 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export valgrind= 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # valgrind= 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # uname -s 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:08:19.221 15:56:37 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@281 -- # MAKE=make 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j88 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@301 -- # TEST_MODE= 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@320 -- # [[ -z 164835 ]] 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@320 -- # kill -0 164835 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@333 -- # local mount target_dir 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.afN3Hx 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.afN3Hx/tests/nvmf /tmp/spdk.afN3Hx 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # df -T 00:08:19.221 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # 
uses["$mount"]=0 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=948682752 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4335747072 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=54613336064 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742047232 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=7128711168 00:08:19.481 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30867648512 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871023616 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=3375104 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=12342370304 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348411904 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=6041600 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30870716416 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871023616 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=307200 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:19.482 15:56:37 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=6174199808 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174203904 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:08:19.482 * Looking for test storage... 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@370 -- # local target_space new_size 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mount=/ 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # target_space=54613336064 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # new_size=9343303680 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:19.482 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # return 0 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- 
# printf %02d 0 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:19.482 15:56:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:08:19.482 [2024-07-25 15:56:37.280546] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:08:19.482 [2024-07-25 15:56:37.280628] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164894 ] 00:08:19.482 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.482 [2024-07-25 15:56:37.455446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.741 [2024-07-25 15:56:37.521635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.741 [2024-07-25 15:56:37.580288] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.741 [2024-07-25 15:56:37.596517] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:08:19.741 INFO: Running with entropic power schedule (0xFF, 100). 00:08:19.741 INFO: Seed: 2041504022 00:08:19.741 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:19.741 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:19.741 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:19.741 INFO: A corpus is not provided, starting from an empty corpus 00:08:19.741 #2 INITED exec/s: 0 rss: 64Mb 00:08:19.741 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:19.741 This may also happen if the target rejected all inputs we tried so far 00:08:19.741 [2024-07-25 15:56:37.645280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:19.741 [2024-07-25 15:56:37.645309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.000 NEW_FUNC[1/699]: 0x483e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:08:20.000 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:20.000 #13 NEW cov: 11942 ft: 11939 corp: 2/109b lim: 320 exec/s: 0 rss: 71Mb L: 108/108 MS: 1 InsertRepeatedBytes- 00:08:20.000 [2024-07-25 15:56:37.826236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.000 [2024-07-25 15:56:37.826286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.000 #14 NEW cov: 12055 ft: 12633 corp: 3/216b lim: 320 exec/s: 0 rss: 71Mb L: 107/108 MS: 1 EraseBytes- 00:08:20.000 [2024-07-25 15:56:37.885746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.000 [2024-07-25 15:56:37.885773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.000 #15 NEW cov: 12061 ft: 12856 corp: 4/324b lim: 320 exec/s: 0 rss: 71Mb L: 108/108 MS: 1 ChangeByte- 00:08:20.000 [2024-07-25 15:56:37.925902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.000 [2024-07-25 15:56:37.925924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.000 #21 NEW cov: 12146 ft: 13070 corp: 5/432b lim: 320 exec/s: 0 rss: 71Mb L: 108/108 MS: 1 ShuffleBytes- 00:08:20.000 [2024-07-25 15:56:37.966007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.000 [2024-07-25 15:56:37.966030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.260 #22 NEW cov: 12146 ft: 13185 corp: 6/541b lim: 320 exec/s: 0 rss: 71Mb L: 109/109 MS: 1 CrossOver- 00:08:20.260 [2024-07-25 15:56:38.016313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:72000000 cdw11:72727272 00:08:20.260 [2024-07-25 15:56:38.016335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.260 [2024-07-25 15:56:38.016393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (72) qid:0 cid:5 nsid:72727272 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7272 00:08:20.260 [2024-07-25 15:56:38.016407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.260 NEW_FUNC[1/1]: 0x17cb600 in nvme_get_sgl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:159 00:08:20.260 #23 NEW cov: 12170 ft: 13464 corp: 7/696b lim: 320 
exec/s: 0 rss: 72Mb L: 155/155 MS: 1 InsertRepeatedBytes- 00:08:20.260 [2024-07-25 15:56:38.066439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:72000000 cdw11:72727272 00:08:20.260 [2024-07-25 15:56:38.066461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.260 [2024-07-25 15:56:38.066517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (72) qid:0 cid:5 nsid:72727272 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7272 00:08:20.260 [2024-07-25 15:56:38.066527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.260 #24 NEW cov: 12170 ft: 13516 corp: 8/851b lim: 320 exec/s: 0 rss: 72Mb L: 155/155 MS: 1 ShuffleBytes- 00:08:20.260 [2024-07-25 15:56:38.116526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:72000000 cdw11:72727272 00:08:20.260 [2024-07-25 15:56:38.116548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.260 [2024-07-25 15:56:38.116605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (23) qid:0 cid:5 nsid:72727272 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x727272 00:08:20.260 [2024-07-25 15:56:38.116617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.260 NEW_FUNC[1/1]: 0x139c3a0 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2093 00:08:20.260 #25 NEW cov: 12201 ft: 13568 corp: 9/1007b lim: 320 exec/s: 0 rss: 72Mb L: 156/156 MS: 1 InsertByte- 00:08:20.260 [2024-07-25 15:56:38.156575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.260 [2024-07-25 15:56:38.156601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.260 #26 NEW cov: 12201 ft: 13612 corp: 10/1132b lim: 320 exec/s: 0 rss: 72Mb L: 125/156 MS: 1 InsertRepeatedBytes- 00:08:20.260 [2024-07-25 15:56:38.206685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.260 [2024-07-25 15:56:38.206707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.260 #27 NEW cov: 12201 ft: 13659 corp: 11/1258b lim: 320 exec/s: 0 rss: 72Mb L: 126/156 MS: 1 InsertByte- 00:08:20.519 [2024-07-25 15:56:38.256839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.519 [2024-07-25 15:56:38.256861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.519 #28 NEW cov: 12201 ft: 13695 corp: 12/1383b lim: 320 exec/s: 0 rss: 72Mb L: 125/156 MS: 1 ChangeBinInt- 00:08:20.519 [2024-07-25 15:56:38.297054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.519 [2024-07-25 15:56:38.297076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:08:20.519 [2024-07-25 15:56:38.297142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.519 [2024-07-25 15:56:38.297153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.519 #29 NEW cov: 12201 ft: 13763 corp: 13/1564b lim: 320 exec/s: 0 rss: 72Mb L: 181/181 MS: 1 CrossOver- 00:08:20.519 [2024-07-25 15:56:38.337190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:72000000 cdw11:72727272 00:08:20.519 [2024-07-25 15:56:38.337215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.519 [2024-07-25 15:56:38.337287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (72) qid:0 cid:5 nsid:72727272 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7272 00:08:20.519 [2024-07-25 15:56:38.337299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.519 #30 NEW cov: 12201 ft: 13776 corp: 14/1719b lim: 320 exec/s: 0 rss: 72Mb L: 155/181 MS: 1 CrossOver- 00:08:20.519 [2024-07-25 15:56:38.377179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.519 [2024-07-25 15:56:38.377201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.519 #31 NEW cov: 12201 ft: 13793 corp: 15/1824b lim: 320 exec/s: 0 rss: 72Mb L: 105/181 MS: 1 EraseBytes- 00:08:20.519 [2024-07-25 15:56:38.417408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:72727200 00:08:20.519 [2024-07-25 15:56:38.417430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.519 [2024-07-25 15:56:38.417486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (72) qid:0 cid:5 nsid:72727272 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x72727272 00:08:20.519 [2024-07-25 15:56:38.417498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.519 #32 NEW cov: 12201 ft: 13813 corp: 16/1981b lim: 320 exec/s: 0 rss: 72Mb L: 157/181 MS: 1 CMP- DE: "\377\017"- 00:08:20.519 [2024-07-25 15:56:38.457529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.519 [2024-07-25 15:56:38.457552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.519 [2024-07-25 15:56:38.457611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (72) qid:0 cid:5 nsid:72727272 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7272727272727272 00:08:20.519 [2024-07-25 15:56:38.457622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.519 #33 NEW cov: 12201 ft: 13821 corp: 17/2126b lim: 320 exec/s: 0 rss: 72Mb L: 145/181 MS: 1 CrossOver- 00:08:20.519 [2024-07-25 15:56:38.507825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:00000000 cdw11:00000000 00:08:20.519 [2024-07-25 15:56:38.507848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.519 [2024-07-25 15:56:38.507898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.519 [2024-07-25 15:56:38.507909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.519 [2024-07-25 15:56:38.507956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.519 [2024-07-25 15:56:38.507966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:20.778 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:20.778 #39 NEW cov: 12224 ft: 14091 corp: 18/2375b lim: 320 exec/s: 0 rss: 72Mb L: 249/249 MS: 1 InsertRepeatedBytes- 00:08:20.778 [2024-07-25 15:56:38.567764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.778 [2024-07-25 15:56:38.567788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.778 #45 NEW cov: 12224 ft: 14104 corp: 19/2479b lim: 320 exec/s: 0 rss: 72Mb L: 104/249 MS: 1 EraseBytes- 00:08:20.778 [2024-07-25 15:56:38.607847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.778 [2024-07-25 15:56:38.607870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.778 #46 NEW cov: 12224 ft: 14118 corp: 20/2588b lim: 320 exec/s: 0 rss: 72Mb L: 109/249 MS: 1 InsertByte- 00:08:20.778 [2024-07-25 15:56:38.648108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.778 [2024-07-25 15:56:38.648130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.778 [2024-07-25 15:56:38.648180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000000a 00:08:20.778 [2024-07-25 15:56:38.648190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.778 #52 NEW cov: 12224 ft: 14158 corp: 21/2723b lim: 320 exec/s: 52 rss: 72Mb L: 135/249 MS: 1 CopyPart- 00:08:20.778 [2024-07-25 15:56:38.698245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:72720000 cdw10:00000000 cdw11:00000000 00:08:20.778 [2024-07-25 15:56:38.698269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.778 [2024-07-25 15:56:38.698318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.778 [2024-07-25 15:56:38.698329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.778 #53 NEW cov: 12224 ft: 14181 corp: 22/2865b lim: 320 exec/s: 53 rss: 72Mb L: 
142/249 MS: 1 CrossOver- 00:08:20.778 [2024-07-25 15:56:38.748495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.778 [2024-07-25 15:56:38.748519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.778 [2024-07-25 15:56:38.748568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.778 [2024-07-25 15:56:38.748579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.778 [2024-07-25 15:56:38.748628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.778 [2024-07-25 15:56:38.748638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.038 #59 NEW cov: 12224 ft: 14209 corp: 23/3089b lim: 320 exec/s: 59 rss: 72Mb L: 224/249 MS: 1 CrossOver- 00:08:21.038 [2024-07-25 15:56:38.798406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.038 [2024-07-25 15:56:38.798429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.038 #60 NEW cov: 12224 ft: 14214 corp: 24/3214b lim: 320 exec/s: 60 rss: 72Mb L: 125/249 MS: 1 ChangeBit- 00:08:21.038 [2024-07-25 15:56:38.838639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.038 [2024-07-25 15:56:38.838662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.038 [2024-07-25 15:56:38.838713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:72727272 cdw10:72727272 cdw11:72727272 00:08:21.038 [2024-07-25 15:56:38.838723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.038 #61 NEW cov: 12224 ft: 14230 corp: 25/3369b lim: 320 exec/s: 61 rss: 72Mb L: 155/249 MS: 1 InsertRepeatedBytes- 00:08:21.038 [2024-07-25 15:56:38.888926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.038 [2024-07-25 15:56:38.888949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.038 [2024-07-25 15:56:38.889026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000fff 00:08:21.038 [2024-07-25 15:56:38.889037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.038 [2024-07-25 15:56:38.889086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.038 [2024-07-25 15:56:38.889096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.038 #62 NEW cov: 12224 ft: 14231 corp: 26/3620b lim: 320 exec/s: 62 rss: 72Mb L: 251/251 MS: 1 PersAutoDict- DE: "\377\017"- 00:08:21.038 [2024-07-25 
15:56:38.938796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.038 [2024-07-25 15:56:38.938817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.038 #64 NEW cov: 12224 ft: 14322 corp: 27/3713b lim: 320 exec/s: 64 rss: 72Mb L: 93/251 MS: 2 EraseBytes-CrossOver- 00:08:21.038 [2024-07-25 15:56:38.979025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:f5 cdw10:00000000 cdw11:00000000 00:08:21.038 [2024-07-25 15:56:38.979048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.038 [2024-07-25 15:56:38.979099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.038 [2024-07-25 15:56:38.979110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.038 #65 NEW cov: 12224 ft: 14405 corp: 28/3842b lim: 320 exec/s: 65 rss: 72Mb L: 129/251 MS: 1 CMP- DE: "\376\377\377\365"- 00:08:21.038 [2024-07-25 15:56:39.019169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.038 [2024-07-25 15:56:39.019190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.038 [2024-07-25 15:56:39.019265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (72) qid:0 cid:5 nsid:72727272 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7272727272727272 00:08:21.038 [2024-07-25 15:56:39.019277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.298 #66 NEW cov: 12224 ft: 14448 corp: 29/3987b lim: 320 exec/s: 66 rss: 72Mb L: 145/251 MS: 1 ChangeBinInt- 00:08:21.298 [2024-07-25 15:56:39.059115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.298 [2024-07-25 15:56:39.059137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.298 #67 NEW cov: 12224 ft: 14461 corp: 30/4092b lim: 320 exec/s: 67 rss: 72Mb L: 105/251 MS: 1 ChangeBit- 00:08:21.298 [2024-07-25 15:56:39.099212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.298 [2024-07-25 15:56:39.099235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.298 #68 NEW cov: 12224 ft: 14475 corp: 31/4201b lim: 320 exec/s: 68 rss: 72Mb L: 109/251 MS: 1 ChangeBinInt- 00:08:21.298 [2024-07-25 15:56:39.139357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.298 [2024-07-25 15:56:39.139382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.298 #69 NEW cov: 12224 ft: 14483 corp: 32/4294b lim: 320 exec/s: 69 rss: 72Mb L: 93/251 MS: 1 ChangeBit- 00:08:21.298 [2024-07-25 15:56:39.189754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 
cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.298 [2024-07-25 15:56:39.189781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.298 [2024-07-25 15:56:39.189847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.298 [2024-07-25 15:56:39.189859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.298 [2024-07-25 15:56:39.189909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.298 [2024-07-25 15:56:39.189920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.298 #70 NEW cov: 12224 ft: 14502 corp: 33/4543b lim: 320 exec/s: 70 rss: 72Mb L: 249/251 MS: 1 CrossOver- 00:08:21.298 [2024-07-25 15:56:39.229722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:f5 cdw10:00000000 cdw11:00000000 00:08:21.298 [2024-07-25 15:56:39.229744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.298 [2024-07-25 15:56:39.229812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.298 [2024-07-25 15:56:39.229824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.298 #71 NEW cov: 12224 ft: 14505 corp: 34/4672b lim: 320 exec/s: 71 rss: 73Mb L: 129/251 MS: 1 ShuffleBytes- 00:08:21.298 [2024-07-25 15:56:39.279895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:72720000 cdw10:00000000 cdw11:00000000 00:08:21.298 [2024-07-25 15:56:39.279922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.299 [2024-07-25 15:56:39.279985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.299 [2024-07-25 15:56:39.279996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.558 #72 NEW cov: 12224 ft: 14576 corp: 35/4814b lim: 320 exec/s: 72 rss: 73Mb L: 142/251 MS: 1 ChangeByte- 00:08:21.558 [2024-07-25 15:56:39.329913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:f5 cdw10:00000000 cdw11:00000000 00:08:21.558 [2024-07-25 15:56:39.329934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.558 [2024-07-25 15:56:39.329999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.558 [2024-07-25 15:56:39.330009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.558 #73 NEW cov: 12224 ft: 14588 corp: 36/4943b lim: 320 exec/s: 73 rss: 73Mb L: 129/251 MS: 1 CopyPart- 00:08:21.558 [2024-07-25 15:56:39.370105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:72000000 cdw11:72727272 00:08:21.558 [2024-07-25 15:56:39.370126] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.558 [2024-07-25 15:56:39.370207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (23) qid:0 cid:5 nsid:72727272 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x727272 00:08:21.558 [2024-07-25 15:56:39.370218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.558 #74 NEW cov: 12224 ft: 14589 corp: 37/5099b lim: 320 exec/s: 74 rss: 73Mb L: 156/251 MS: 1 ShuffleBytes- 00:08:21.558 [2024-07-25 15:56:39.420259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.558 [2024-07-25 15:56:39.420281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.558 [2024-07-25 15:56:39.420337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (72) qid:0 cid:5 nsid:72727272 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7272727272727272 00:08:21.558 [2024-07-25 15:56:39.420348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.558 #75 NEW cov: 12224 ft: 14591 corp: 38/5244b lim: 320 exec/s: 75 rss: 73Mb L: 145/251 MS: 1 ShuffleBytes- 00:08:21.558 [2024-07-25 15:56:39.460255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.558 [2024-07-25 15:56:39.460276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.558 #76 NEW cov: 12224 ft: 14661 corp: 39/5325b lim: 320 exec/s: 76 rss: 73Mb L: 81/251 MS: 1 EraseBytes- 00:08:21.558 [2024-07-25 15:56:39.510562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.558 [2024-07-25 15:56:39.510584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.558 [2024-07-25 15:56:39.510639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (72) qid:0 cid:5 nsid:72727272 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7272727272727272 00:08:21.558 [2024-07-25 15:56:39.510651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.558 #77 NEW cov: 12224 ft: 14722 corp: 40/5470b lim: 320 exec/s: 77 rss: 73Mb L: 145/251 MS: 1 CopyPart- 00:08:21.818 [2024-07-25 15:56:39.550531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.818 [2024-07-25 15:56:39.550553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.818 #78 NEW cov: 12224 ft: 14766 corp: 41/5580b lim: 320 exec/s: 78 rss: 73Mb L: 110/251 MS: 1 InsertByte- 00:08:21.818 [2024-07-25 15:56:39.590755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:72720000 cdw10:00000000 cdw11:00000000 00:08:21.818 [2024-07-25 15:56:39.590782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:08:21.818 [2024-07-25 15:56:39.590829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.818 [2024-07-25 15:56:39.590840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.818 #79 NEW cov: 12224 ft: 14826 corp: 42/5722b lim: 320 exec/s: 79 rss: 73Mb L: 142/251 MS: 1 ChangeBit- 00:08:21.818 [2024-07-25 15:56:39.641010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.818 [2024-07-25 15:56:39.641032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.818 [2024-07-25 15:56:39.641097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.818 [2024-07-25 15:56:39.641107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.818 [2024-07-25 15:56:39.641156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.818 [2024-07-25 15:56:39.641166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.818 #80 NEW cov: 12224 ft: 14871 corp: 43/5946b lim: 320 exec/s: 40 rss: 73Mb L: 224/251 MS: 1 ChangeBinInt- 00:08:21.818 #80 DONE cov: 12224 ft: 14871 corp: 43/5946b lim: 320 exec/s: 40 rss: 73Mb 00:08:21.818 ###### Recommended dictionary. ###### 00:08:21.818 "\377\017" # Uses: 1 00:08:21.818 "\376\377\377\365" # Uses: 0 00:08:21.818 ###### End of recommended dictionary. 
###### 00:08:21.818 Done 80 runs in 2 second(s) 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:08:21.818 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:22.077 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:22.077 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:22.077 15:56:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:08:22.077 [2024-07-25 15:56:39.834190] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:08:22.077 [2024-07-25 15:56:39.834270] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165310 ] 00:08:22.077 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.077 [2024-07-25 15:56:40.010319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.337 [2024-07-25 15:56:40.083500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.337 [2024-07-25 15:56:40.142446] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.337 [2024-07-25 15:56:40.158666] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:08:22.337 INFO: Running with entropic power schedule (0xFF, 100). 00:08:22.337 INFO: Seed: 309552042 00:08:22.337 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:22.337 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:22.337 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:22.337 INFO: A corpus is not provided, starting from an empty corpus 00:08:22.337 #2 INITED exec/s: 0 rss: 63Mb 00:08:22.337 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:22.337 This may also happen if the target rejected all inputs we tried so far 00:08:22.337 [2024-07-25 15:56:40.225069] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3255 00:08:22.337 [2024-07-25 15:56:40.225583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aab002b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.337 [2024-07-25 15:56:40.225620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.596 NEW_FUNC[1/701]: 0x484780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:08:22.596 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:22.596 #4 NEW cov: 12028 ft: 12022 corp: 2/10b lim: 30 exec/s: 0 rss: 71Mb L: 9/9 MS: 2 ShuffleBytes-CMP- DE: "\253+\2642U\331\027\000"- 00:08:22.596 [2024-07-25 15:56:40.385840] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000055d9 00:08:22.596 [2024-07-25 15:56:40.386363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ab2b02b4 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.596 [2024-07-25 15:56:40.386405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.596 #5 NEW cov: 12155 ft: 12582 corp: 3/19b lim: 30 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 PersAutoDict- DE: "\253+\2642U\331\027\000"- 00:08:22.596 [2024-07-25 15:56:40.446128] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000055d9 00:08:22.596 [2024-07-25 15:56:40.446623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ab2b02b4 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.596 [2024-07-25 15:56:40.446653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.596 #6 NEW cov: 12161 ft: 12799 corp: 4/27b lim: 30 exec/s: 0 rss: 71Mb L: 8/9 MS: 1 EraseBytes- 00:08:22.596 [2024-07-25 15:56:40.506433] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000055d9 00:08:22.596 [2024-07-25 15:56:40.506929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ab2b02b4 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.596 [2024-07-25 15:56:40.506954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.596 #7 NEW cov: 12246 ft: 13146 corp: 5/36b lim: 30 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 PersAutoDict- DE: "\253+\2642U\331\027\000"- 00:08:22.596 [2024-07-25 15:56:40.556896] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3255 00:08:22.596 [2024-07-25 15:56:40.557190] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (484448) > buf size (4096) 00:08:22.596 [2024-07-25 15:56:40.557675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aab002b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.596 [2024-07-25 15:56:40.557699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.596 [2024-07-25 15:56:40.557810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d9178155 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.596 [2024-07-25 15:56:40.557824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.596 #8 NEW cov: 12269 ft: 13598 corp: 6/49b lim: 30 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 CopyPart- 00:08:22.856 [2024-07-25 15:56:40.607317] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3255 00:08:22.856 [2024-07-25 15:56:40.607604] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (484448) > buf size (4096) 00:08:22.856 [2024-07-25 15:56:40.608110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aab002b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.856 [2024-07-25 15:56:40.608138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.856 [2024-07-25 15:56:40.608236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d9178155 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.856 [2024-07-25 15:56:40.608252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.856 #9 NEW cov: 12269 ft: 13693 corp: 7/62b lim: 30 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 CopyPart- 00:08:22.856 [2024-07-25 15:56:40.667667] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000055d9 00:08:22.856 [2024-07-25 15:56:40.667968] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000000a 00:08:22.856 [2024-07-25 15:56:40.668484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ab2b02b4 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.856 [2024-07-25 15:56:40.668510] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.856 [2024-07-25 15:56:40.668603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:17170200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.856 [2024-07-25 15:56:40.668614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.856 #10 NEW cov: 12269 ft: 13760 corp: 8/74b lim: 30 exec/s: 0 rss: 72Mb L: 12/13 MS: 1 CopyPart- 00:08:22.856 [2024-07-25 15:56:40.738166] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3255 00:08:22.856 [2024-07-25 15:56:40.738461] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (484696) > buf size (4096) 00:08:22.856 [2024-07-25 15:56:40.738992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aab002b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.856 [2024-07-25 15:56:40.739018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.856 [2024-07-25 15:56:40.739110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d95581d9 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.856 [2024-07-25 15:56:40.739127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.856 #11 NEW cov: 12269 ft: 13782 corp: 9/87b lim: 30 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 CopyPart- 00:08:22.856 [2024-07-25 15:56:40.788335] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3255 00:08:22.856 [2024-07-25 15:56:40.788608] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (484448) > buf size (4096) 00:08:22.856 [2024-07-25 15:56:40.789111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aab002b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.856 [2024-07-25 15:56:40.789136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.856 [2024-07-25 15:56:40.789236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d9178155 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.856 [2024-07-25 15:56:40.789250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.856 #12 NEW cov: 12269 ft: 13797 corp: 10/100b lim: 30 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:22.856 [2024-07-25 15:56:40.838813] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3255 00:08:22.856 [2024-07-25 15:56:40.839091] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (484448) > buf size (4096) 00:08:22.856 [2024-07-25 15:56:40.839587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aab002b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.856 [2024-07-25 15:56:40.839618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.856 [2024-07-25 15:56:40.839711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 
nsid:0 cdw10:d9178155 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.856 [2024-07-25 15:56:40.839726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.115 #13 NEW cov: 12269 ft: 13895 corp: 11/113b lim: 30 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 ChangeByte- 00:08:23.115 [2024-07-25 15:56:40.908942] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000032ab 00:08:23.115 [2024-07-25 15:56:40.909439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2b5581b4 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.115 [2024-07-25 15:56:40.909467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.115 #14 NEW cov: 12269 ft: 13933 corp: 12/122b lim: 30 exec/s: 0 rss: 72Mb L: 9/13 MS: 1 ShuffleBytes- 00:08:23.115 [2024-07-25 15:56:40.959252] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3255 00:08:23.115 [2024-07-25 15:56:40.959554] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (484696) > buf size (4096) 00:08:23.115 [2024-07-25 15:56:40.960070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aab002b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.115 [2024-07-25 15:56:40.960096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.115 [2024-07-25 15:56:40.960187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d95581d9 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.115 [2024-07-25 15:56:40.960203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.116 #15 NEW cov: 12269 ft: 13961 corp: 13/135b lim: 30 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:23.116 [2024-07-25 15:56:41.029516] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003255 00:08:23.116 [2024-07-25 15:56:41.029829] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (484448) > buf size (4096) 00:08:23.116 [2024-07-25 15:56:41.030329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aab832b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.116 [2024-07-25 15:56:41.030354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.116 [2024-07-25 15:56:41.030456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d9178155 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.116 [2024-07-25 15:56:41.030470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.116 #16 NEW cov: 12269 ft: 14002 corp: 14/148b lim: 30 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 ChangeByte- 00:08:23.116 [2024-07-25 15:56:41.079983] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3255 00:08:23.116 [2024-07-25 15:56:41.080281] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (484696) > buf size (4096) 00:08:23.116 [2024-07-25 15:56:41.080772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:4 nsid:0 cdw10:0aab002b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.116 [2024-07-25 15:56:41.080798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.116 [2024-07-25 15:56:41.080895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d9558124 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.116 [2024-07-25 15:56:41.080909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.116 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:23.116 #17 NEW cov: 12292 ft: 14047 corp: 15/161b lim: 30 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 ChangeByte- 00:08:23.375 [2024-07-25 15:56:41.130136] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000032ab 00:08:23.375 [2024-07-25 15:56:41.130424] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (306900) > buf size (4096) 00:08:23.375 [2024-07-25 15:56:41.130911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2b5581b4 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.375 [2024-07-25 15:56:41.130938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.375 [2024-07-25 15:56:41.131034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:2bb48132 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.375 [2024-07-25 15:56:41.131049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.375 #18 NEW cov: 12292 ft: 14123 corp: 16/174b lim: 30 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 CrossOver- 00:08:23.375 [2024-07-25 15:56:41.190252] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000024d9 00:08:23.375 [2024-07-25 15:56:41.190756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5581d9 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.375 [2024-07-25 15:56:41.190786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.375 #19 NEW cov: 12292 ft: 14132 corp: 17/183b lim: 30 exec/s: 19 rss: 72Mb L: 9/13 MS: 1 EraseBytes- 00:08:23.375 [2024-07-25 15:56:41.250765] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000b4d9 00:08:23.375 [2024-07-25 15:56:41.251280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ab328355 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.375 [2024-07-25 15:56:41.251307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.375 #20 NEW cov: 12292 ft: 14145 corp: 18/192b lim: 30 exec/s: 20 rss: 72Mb L: 9/13 MS: 1 ShuffleBytes- 00:08:23.375 [2024-07-25 15:56:41.301051] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000b4d9 00:08:23.375 [2024-07-25 15:56:41.301377] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000b432 00:08:23.375 [2024-07-25 15:56:41.301891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:4 nsid:0 cdw10:ab328355 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.375 [2024-07-25 15:56:41.301917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.375 [2024-07-25 15:56:41.302010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:170083ab cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.375 [2024-07-25 15:56:41.302024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.375 #21 NEW cov: 12292 ft: 14175 corp: 19/209b lim: 30 exec/s: 21 rss: 72Mb L: 17/17 MS: 1 PersAutoDict- DE: "\253+\2642U\331\027\000"- 00:08:23.634 [2024-07-25 15:56:41.371586] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000055d9 00:08:23.634 [2024-07-25 15:56:41.371908] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000000a 00:08:23.634 [2024-07-25 15:56:41.372454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ab2b02b4 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.634 [2024-07-25 15:56:41.372483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.634 [2024-07-25 15:56:41.372584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:60170200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.634 [2024-07-25 15:56:41.372603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.634 #22 NEW cov: 12292 ft: 14204 corp: 20/221b lim: 30 exec/s: 22 rss: 72Mb L: 12/17 MS: 1 ChangeByte- 00:08:23.634 [2024-07-25 15:56:41.441674] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000055d9 00:08:23.634 [2024-07-25 15:56:41.442197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ab2b02b4 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.634 [2024-07-25 15:56:41.442225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.634 #23 NEW cov: 12292 ft: 14225 corp: 21/232b lim: 30 exec/s: 23 rss: 72Mb L: 11/17 MS: 1 EraseBytes- 00:08:23.634 [2024-07-25 15:56:41.512004] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003255 00:08:23.634 [2024-07-25 15:56:41.512534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ab2b83b4 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.634 [2024-07-25 15:56:41.512561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.634 #24 NEW cov: 12292 ft: 14226 corp: 22/242b lim: 30 exec/s: 24 rss: 72Mb L: 10/17 MS: 1 InsertByte- 00:08:23.634 [2024-07-25 15:56:41.562516] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3355 00:08:23.634 [2024-07-25 15:56:41.563061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aab002b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.634 [2024-07-25 15:56:41.563088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.634 #25 NEW cov: 12292 ft: 14229 corp: 23/251b lim: 30 exec/s: 25 rss: 72Mb L: 9/17 MS: 1 ChangeBit- 00:08:23.634 [2024-07-25 15:56:41.612662] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003255 00:08:23.634 [2024-07-25 15:56:41.613217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ab2b83b4 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.634 [2024-07-25 15:56:41.613243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.894 #26 NEW cov: 12292 ft: 14243 corp: 24/261b lim: 30 exec/s: 26 rss: 72Mb L: 10/17 MS: 1 ShuffleBytes- 00:08:23.894 [2024-07-25 15:56:41.673392] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:23.894 [2024-07-25 15:56:41.673677] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:23.894 [2024-07-25 15:56:41.673976] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:23.894 [2024-07-25 15:56:41.674259] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x55d9 00:08:23.894 [2024-07-25 15:56:41.674782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:f6ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.894 [2024-07-25 15:56:41.674807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.894 [2024-07-25 15:56:41.674902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.894 [2024-07-25 15:56:41.674917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.894 [2024-07-25 15:56:41.675009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.894 [2024-07-25 15:56:41.675023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.894 [2024-07-25 15:56:41.675119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.894 [2024-07-25 15:56:41.675132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:23.894 #31 NEW cov: 12292 ft: 14801 corp: 25/285b lim: 30 exec/s: 31 rss: 72Mb L: 24/24 MS: 5 ChangeBinInt-CrossOver-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:08:23.894 [2024-07-25 15:56:41.723516] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000060d9 00:08:23.894 [2024-07-25 15:56:41.723827] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000000a 00:08:23.894 [2024-07-25 15:56:41.724345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ab2b02b4 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.894 [2024-07-25 15:56:41.724371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.894 [2024-07-25 
15:56:41.724473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:55170200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.894 [2024-07-25 15:56:41.724487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.894 #32 NEW cov: 12292 ft: 14815 corp: 26/297b lim: 30 exec/s: 32 rss: 72Mb L: 12/24 MS: 1 ShuffleBytes- 00:08:23.894 [2024-07-25 15:56:41.773640] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000b432 00:08:23.894 [2024-07-25 15:56:41.774152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ab2b83b4 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.894 [2024-07-25 15:56:41.774177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.894 #33 NEW cov: 12292 ft: 14829 corp: 27/307b lim: 30 exec/s: 33 rss: 72Mb L: 10/24 MS: 1 CrossOver- 00:08:23.894 [2024-07-25 15:56:41.833836] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (175280) > buf size (4096) 00:08:23.894 [2024-07-25 15:56:41.834362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ab2b00b4 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.894 [2024-07-25 15:56:41.834389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.894 #34 NEW cov: 12292 ft: 14843 corp: 28/318b lim: 30 exec/s: 34 rss: 73Mb L: 11/24 MS: 1 InsertByte- 00:08:24.153 [2024-07-25 15:56:41.894397] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3255 00:08:24.153 [2024-07-25 15:56:41.894695] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000000a 00:08:24.153 [2024-07-25 15:56:41.895216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:abab002b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.153 [2024-07-25 15:56:41.895241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.153 [2024-07-25 15:56:41.895335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d9170200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.153 [2024-07-25 15:56:41.895349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.153 #35 NEW cov: 12292 ft: 14862 corp: 29/330b lim: 30 exec/s: 35 rss: 73Mb L: 12/24 MS: 1 PersAutoDict- DE: "\253+\2642U\331\027\000"- 00:08:24.153 [2024-07-25 15:56:41.964596] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3255 00:08:24.153 [2024-07-25 15:56:41.964902] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d917 00:08:24.153 [2024-07-25 15:56:41.965414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aab002b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.153 [2024-07-25 15:56:41.965442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.154 [2024-07-25 15:56:41.965538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:5 nsid:0 cdw10:2ed98155 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.154 [2024-07-25 15:56:41.965551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.154 #36 NEW cov: 12292 ft: 14863 corp: 30/344b lim: 30 exec/s: 36 rss: 73Mb L: 14/24 MS: 1 InsertByte- 00:08:24.154 [2024-07-25 15:56:42.015033] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b5b5 00:08:24.154 [2024-07-25 15:56:42.015337] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b5b5 00:08:24.154 [2024-07-25 15:56:42.015612] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b5b5 00:08:24.154 [2024-07-25 15:56:42.015906] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b5d9 00:08:24.154 [2024-07-25 15:56:42.016182] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (873620) > buf size (4096) 00:08:24.154 [2024-07-25 15:56:42.016717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a5581b5 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.154 [2024-07-25 15:56:42.016743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.154 [2024-07-25 15:56:42.016837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:b5b581b5 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.154 [2024-07-25 15:56:42.016852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.154 [2024-07-25 15:56:42.016949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b5b581b5 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.154 [2024-07-25 15:56:42.016965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.154 [2024-07-25 15:56:42.017073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:b5b581b5 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.154 [2024-07-25 15:56:42.017086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.154 [2024-07-25 15:56:42.017187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:552483d9 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.154 [2024-07-25 15:56:42.017200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:24.154 #37 NEW cov: 12292 ft: 14922 corp: 31/374b lim: 30 exec/s: 37 rss: 73Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:08:24.154 [2024-07-25 15:56:42.085210] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003255 00:08:24.154 [2024-07-25 15:56:42.085744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:000a83b4 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.154 [2024-07-25 15:56:42.085774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.154 #38 NEW cov: 12292 ft: 14932 corp: 32/384b lim: 30 exec/s: 38 rss: 73Mb 
L: 10/30 MS: 1 ChangeBinInt- 00:08:24.154 [2024-07-25 15:56:42.135790] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3255 00:08:24.154 [2024-07-25 15:56:42.136093] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (484448) > buf size (4096) 00:08:24.154 [2024-07-25 15:56:42.136593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aab002b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.154 [2024-07-25 15:56:42.136618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.154 [2024-07-25 15:56:42.136714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d9178155 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.154 [2024-07-25 15:56:42.136727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.414 #39 NEW cov: 12292 ft: 14985 corp: 33/397b lim: 30 exec/s: 39 rss: 73Mb L: 13/30 MS: 1 ChangeByte- 00:08:24.414 [2024-07-25 15:56:42.186258] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000060d9 00:08:24.414 [2024-07-25 15:56:42.186574] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005517 00:08:24.414 [2024-07-25 15:56:42.187099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ab2b02b4 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.414 [2024-07-25 15:56:42.187123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.414 [2024-07-25 15:56:42.187229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000a0200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.414 [2024-07-25 15:56:42.187243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.414 #40 NEW cov: 12292 ft: 15017 corp: 34/413b lim: 30 exec/s: 20 rss: 73Mb L: 16/30 MS: 1 CrossOver- 00:08:24.414 #40 DONE cov: 12292 ft: 15017 corp: 34/413b lim: 30 exec/s: 20 rss: 73Mb 00:08:24.414 ###### Recommended dictionary. ###### 00:08:24.414 "\253+\2642U\331\027\000" # Uses: 4 00:08:24.414 ###### End of recommended dictionary. 
###### 00:08:24.414 Done 40 runs in 2 second(s) 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:24.414 15:56:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:08:24.414 [2024-07-25 15:56:42.365667] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:08:24.414 [2024-07-25 15:56:42.365741] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165747 ] 00:08:24.414 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.673 [2024-07-25 15:56:42.543415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.673 [2024-07-25 15:56:42.608194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.932 [2024-07-25 15:56:42.667300] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.932 [2024-07-25 15:56:42.683536] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:08:24.932 INFO: Running with entropic power schedule (0xFF, 100). 00:08:24.932 INFO: Seed: 2835557100 00:08:24.932 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:24.932 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:24.932 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:24.932 INFO: A corpus is not provided, starting from an empty corpus 00:08:24.932 #2 INITED exec/s: 0 rss: 63Mb 00:08:24.932 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:24.932 This may also happen if the target rejected all inputs we tried so far 00:08:24.932 [2024-07-25 15:56:42.750441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a0a000a cdw11:00006500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.932 [2024-07-25 15:56:42.750475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.932 NEW_FUNC[1/700]: 0x487230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:08:24.932 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:24.932 #5 NEW cov: 11998 ft: 11999 corp: 2/8b lim: 35 exec/s: 0 rss: 71Mb L: 7/7 MS: 3 CopyPart-CrossOver-CMP- DE: "e\000\000\000"- 00:08:24.932 [2024-07-25 15:56:42.920779] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:24.932 [2024-07-25 15:56:42.921060] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:24.932 [2024-07-25 15:56:42.921525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.932 [2024-07-25 15:56:42.921566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.932 [2024-07-25 15:56:42.921652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.932 [2024-07-25 15:56:42.921668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.932 [2024-07-25 15:56:42.921763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:24.932 [2024-07-25 15:56:42.921778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.191 #13 NEW cov: 12120 ft: 12952 corp: 3/31b lim: 35 exec/s: 0 rss: 71Mb L: 23/23 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:08:25.191 [2024-07-25 15:56:42.970940] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.191 [2024-07-25 15:56:42.971200] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.191 [2024-07-25 15:56:42.971459] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.191 [2024-07-25 15:56:42.971962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.191 [2024-07-25 15:56:42.971990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.191 [2024-07-25 15:56:42.972081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.191 [2024-07-25 15:56:42.972097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.191 [2024-07-25 15:56:42.972186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.191 [2024-07-25 15:56:42.972202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.191 [2024-07-25 15:56:42.972285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.191 [2024-07-25 15:56:42.972301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:25.191 #14 NEW cov: 12126 ft: 13712 corp: 4/59b lim: 35 exec/s: 0 rss: 71Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:08:25.192 [2024-07-25 15:56:43.041335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a65000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.192 [2024-07-25 15:56:43.041360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.192 #25 NEW cov: 12211 ft: 14003 corp: 5/66b lim: 35 exec/s: 0 rss: 72Mb L: 7/28 MS: 1 PersAutoDict- DE: "e\000\000\000"- 00:08:25.192 [2024-07-25 15:56:43.112497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aaf000a cdw11:af00afaf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.192 [2024-07-25 15:56:43.112521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.192 [2024-07-25 15:56:43.112611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:afaf00af cdw11:af00afaf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.192 [2024-07-25 15:56:43.112624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:08:25.192 [2024-07-25 15:56:43.112712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:afaf00af cdw11:af00afaf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.192 [2024-07-25 15:56:43.112724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.192 [2024-07-25 15:56:43.112817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:afaf00af cdw11:0000af65 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.192 [2024-07-25 15:56:43.112831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:25.192 #26 NEW cov: 12211 ft: 14077 corp: 6/96b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:08:25.192 [2024-07-25 15:56:43.181842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a0a000a cdw11:00005f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.192 [2024-07-25 15:56:43.181865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.451 #27 NEW cov: 12211 ft: 14166 corp: 7/103b lim: 35 exec/s: 0 rss: 72Mb L: 7/30 MS: 1 ChangeBinInt- 00:08:25.451 [2024-07-25 15:56:43.232026] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.451 [2024-07-25 15:56:43.232311] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.451 [2024-07-25 15:56:43.232577] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.451 [2024-07-25 15:56:43.233049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.451 [2024-07-25 15:56:43.233077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.451 [2024-07-25 15:56:43.233165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.451 [2024-07-25 15:56:43.233182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.451 [2024-07-25 15:56:43.233276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.451 [2024-07-25 15:56:43.233290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.452 [2024-07-25 15:56:43.233382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.452 [2024-07-25 15:56:43.233398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:25.452 #28 NEW cov: 12211 ft: 14243 corp: 8/131b lim: 35 exec/s: 0 rss: 72Mb L: 28/30 MS: 1 InsertRepeatedBytes- 00:08:25.452 [2024-07-25 15:56:43.282246] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.452 [2024-07-25 15:56:43.282505] 
ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.452 [2024-07-25 15:56:43.282973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.452 [2024-07-25 15:56:43.282999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.452 [2024-07-25 15:56:43.283096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00006500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.452 [2024-07-25 15:56:43.283113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.452 [2024-07-25 15:56:43.283207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.452 [2024-07-25 15:56:43.283224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.452 #29 NEW cov: 12211 ft: 14327 corp: 9/158b lim: 35 exec/s: 0 rss: 72Mb L: 27/30 MS: 1 PersAutoDict- DE: "e\000\000\000"- 00:08:25.452 [2024-07-25 15:56:43.332534] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.452 [2024-07-25 15:56:43.332793] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.452 [2024-07-25 15:56:43.333079] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.452 [2024-07-25 15:56:43.333552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.452 [2024-07-25 15:56:43.333577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.452 [2024-07-25 15:56:43.333666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.452 [2024-07-25 15:56:43.333681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.452 [2024-07-25 15:56:43.333778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.452 [2024-07-25 15:56:43.333799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.452 [2024-07-25 15:56:43.333898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:81000081 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.452 [2024-07-25 15:56:43.333913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:25.452 #35 NEW cov: 12211 ft: 14347 corp: 10/189b lim: 35 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:08:25.452 [2024-07-25 15:56:43.402901] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.452 [2024-07-25 15:56:43.403189] 
ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.452 [2024-07-25 15:56:43.403480] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.452 [2024-07-25 15:56:43.404025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.452 [2024-07-25 15:56:43.404051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.452 [2024-07-25 15:56:43.404144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:65000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.452 [2024-07-25 15:56:43.404159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.452 [2024-07-25 15:56:43.404248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.452 [2024-07-25 15:56:43.404264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.452 [2024-07-25 15:56:43.404355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:81000081 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.452 [2024-07-25 15:56:43.404373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:25.452 #36 NEW cov: 12211 ft: 14386 corp: 11/220b lim: 35 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 PersAutoDict- DE: "e\000\000\000"- 00:08:25.711 [2024-07-25 15:56:43.473409] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.711 [2024-07-25 15:56:43.473722] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.711 [2024-07-25 15:56:43.473987] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.711 [2024-07-25 15:56:43.474468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.711 [2024-07-25 15:56:43.474496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.711 [2024-07-25 15:56:43.474585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.711 [2024-07-25 15:56:43.474603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.711 [2024-07-25 15:56:43.474695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.711 [2024-07-25 15:56:43.474711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.711 [2024-07-25 15:56:43.474793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:08:25.711 [2024-07-25 15:56:43.474812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:25.711 #37 NEW cov: 12211 ft: 14404 corp: 12/253b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 CrossOver- 00:08:25.711 [2024-07-25 15:56:43.523837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a0a000a cdw11:00005f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.711 [2024-07-25 15:56:43.523861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.711 #38 NEW cov: 12211 ft: 14442 corp: 13/262b lim: 35 exec/s: 0 rss: 72Mb L: 9/33 MS: 1 CopyPart- 00:08:25.711 [2024-07-25 15:56:43.584339] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.711 [2024-07-25 15:56:43.584626] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.711 [2024-07-25 15:56:43.585103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000017 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.711 [2024-07-25 15:56:43.585129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.711 [2024-07-25 15:56:43.585219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.711 [2024-07-25 15:56:43.585236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.711 [2024-07-25 15:56:43.585325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.711 [2024-07-25 15:56:43.585340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.711 #39 NEW cov: 12211 ft: 14464 corp: 14/285b lim: 35 exec/s: 0 rss: 72Mb L: 23/33 MS: 1 ChangeBinInt- 00:08:25.711 [2024-07-25 15:56:43.634833] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.711 [2024-07-25 15:56:43.635159] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.711 [2024-07-25 15:56:43.635457] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.711 [2024-07-25 15:56:43.635956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000017 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.711 [2024-07-25 15:56:43.635981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.711 [2024-07-25 15:56:43.636066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.711 [2024-07-25 15:56:43.636082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.711 [2024-07-25 15:56:43.636169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.711 [2024-07-25 15:56:43.636186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.711 [2024-07-25 15:56:43.636272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.711 [2024-07-25 15:56:43.636285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:25.711 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:25.711 #40 NEW cov: 12234 ft: 14576 corp: 15/314b lim: 35 exec/s: 0 rss: 72Mb L: 29/33 MS: 1 CopyPart- 00:08:25.970 [2024-07-25 15:56:43.704684] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.970 [2024-07-25 15:56:43.705169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.970 [2024-07-25 15:56:43.705196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.970 #48 NEW cov: 12234 ft: 14640 corp: 16/321b lim: 35 exec/s: 48 rss: 72Mb L: 7/33 MS: 3 CrossOver-CMP-InsertRepeatedBytes- DE: "\000\020"- 00:08:25.970 [2024-07-25 15:56:43.755598] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.970 [2024-07-25 15:56:43.755910] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.970 [2024-07-25 15:56:43.756180] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.970 [2024-07-25 15:56:43.756457] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.970 [2024-07-25 15:56:43.756952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.970 [2024-07-25 15:56:43.756978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.970 [2024-07-25 15:56:43.757069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000ff0d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.970 [2024-07-25 15:56:43.757084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.970 [2024-07-25 15:56:43.757168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.970 [2024-07-25 15:56:43.757182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.970 [2024-07-25 15:56:43.757270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.970 [2024-07-25 15:56:43.757285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:25.970 
[2024-07-25 15:56:43.757372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:81810000 cdw11:00008100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.970 [2024-07-25 15:56:43.757388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:25.970 #49 NEW cov: 12234 ft: 14798 corp: 17/356b lim: 35 exec/s: 49 rss: 72Mb L: 35/35 MS: 1 CMP- DE: "\377\015"- 00:08:25.970 [2024-07-25 15:56:43.826009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0c65000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.970 [2024-07-25 15:56:43.826034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.970 #50 NEW cov: 12234 ft: 14817 corp: 18/363b lim: 35 exec/s: 50 rss: 72Mb L: 7/35 MS: 1 ChangeBinInt- 00:08:25.970 [2024-07-25 15:56:43.876223] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.970 [2024-07-25 15:56:43.876718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1700000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.970 [2024-07-25 15:56:43.876743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.970 [2024-07-25 15:56:43.876836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.970 [2024-07-25 15:56:43.876853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.970 #51 NEW cov: 12234 ft: 15019 corp: 19/383b lim: 35 exec/s: 51 rss: 72Mb L: 20/35 MS: 1 EraseBytes- 00:08:25.970 [2024-07-25 15:56:43.926480] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.970 [2024-07-25 15:56:43.926974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1700000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.971 [2024-07-25 15:56:43.927000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.971 [2024-07-25 15:56:43.927083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.971 [2024-07-25 15:56:43.927099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.230 #52 NEW cov: 12234 ft: 15056 corp: 20/403b lim: 35 exec/s: 52 rss: 72Mb L: 20/35 MS: 1 ChangeBinInt- 00:08:26.230 [2024-07-25 15:56:43.996845] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.230 [2024-07-25 15:56:43.997115] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.230 [2024-07-25 15:56:43.997410] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.230 [2024-07-25 15:56:43.997872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.230 [2024-07-25 
15:56:43.997901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.230 [2024-07-25 15:56:43.997998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.230 [2024-07-25 15:56:43.998014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.230 [2024-07-25 15:56:43.998111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.230 [2024-07-25 15:56:43.998128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.230 [2024-07-25 15:56:43.998218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.230 [2024-07-25 15:56:43.998238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.230 #53 NEW cov: 12234 ft: 15121 corp: 21/433b lim: 35 exec/s: 53 rss: 72Mb L: 30/35 MS: 1 PersAutoDict- DE: "\000\020"- 00:08:26.230 [2024-07-25 15:56:44.047126] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.230 [2024-07-25 15:56:44.047408] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.230 [2024-07-25 15:56:44.047661] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.230 [2024-07-25 15:56:44.048129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.230 [2024-07-25 15:56:44.048157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.230 [2024-07-25 15:56:44.048250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:10000000 cdw11:65000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.230 [2024-07-25 15:56:44.048267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.230 [2024-07-25 15:56:44.048363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.230 [2024-07-25 15:56:44.048383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.230 [2024-07-25 15:56:44.048478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.230 [2024-07-25 15:56:44.048495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.230 #54 NEW cov: 12234 ft: 15135 corp: 22/462b lim: 35 exec/s: 54 rss: 72Mb L: 29/35 MS: 1 PersAutoDict- DE: "\000\020"- 00:08:26.230 [2024-07-25 15:56:44.117459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 
cid:4 nsid:0 cdw10:0a0a000a cdw11:00005f65 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.230 [2024-07-25 15:56:44.117485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.231 #55 NEW cov: 12234 ft: 15155 corp: 23/473b lim: 35 exec/s: 55 rss: 72Mb L: 11/35 MS: 1 PersAutoDict- DE: "e\000\000\000"- 00:08:26.231 [2024-07-25 15:56:44.167455] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.231 [2024-07-25 15:56:44.167970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:0000ff07 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.231 [2024-07-25 15:56:44.168006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.231 #56 NEW cov: 12234 ft: 15168 corp: 24/480b lim: 35 exec/s: 56 rss: 73Mb L: 7/35 MS: 1 ChangeBinInt- 00:08:26.491 [2024-07-25 15:56:44.228605] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.491 [2024-07-25 15:56:44.229114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1700000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.229140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.491 [2024-07-25 15:56:44.229237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000010 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.229249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.491 [2024-07-25 15:56:44.229342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.229357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.491 #57 NEW cov: 12234 ft: 15181 corp: 25/502b lim: 35 exec/s: 57 rss: 73Mb L: 22/35 MS: 1 PersAutoDict- DE: "\000\020"- 00:08:26.491 [2024-07-25 15:56:44.279636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aaf000a cdw11:af00afaf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.279660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.491 [2024-07-25 15:56:44.279756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:afaf00af cdw11:af00afaf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.279774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.491 [2024-07-25 15:56:44.279859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:afaf00af cdw11:af00afaf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.279872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.491 [2024-07-25 15:56:44.279959] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:afaf00af cdw11:6500afaf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.279975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.491 #58 NEW cov: 12234 ft: 15184 corp: 26/533b lim: 35 exec/s: 58 rss: 73Mb L: 31/35 MS: 1 InsertByte- 00:08:26.491 [2024-07-25 15:56:44.349974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aaf000a cdw11:af00afaf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.349996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.491 [2024-07-25 15:56:44.350115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:afaf00af cdw11:af00afaf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.350129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.491 [2024-07-25 15:56:44.350218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:afaf00af cdw11:af00afaf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.350231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.491 [2024-07-25 15:56:44.350340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:afaf00af cdw11:5f000a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.350353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.491 #59 NEW cov: 12234 ft: 15196 corp: 27/567b lim: 35 exec/s: 59 rss: 73Mb L: 34/35 MS: 1 CrossOver- 00:08:26.491 [2024-07-25 15:56:44.399084] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.491 [2024-07-25 15:56:44.399351] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.491 [2024-07-25 15:56:44.399616] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.491 [2024-07-25 15:56:44.399892] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.491 [2024-07-25 15:56:44.400376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.400404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.491 [2024-07-25 15:56:44.400493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.400509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.491 [2024-07-25 15:56:44.400595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.400609] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.491 [2024-07-25 15:56:44.400700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:81000081 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.400717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.491 #60 NEW cov: 12234 ft: 15213 corp: 28/598b lim: 35 exec/s: 60 rss: 73Mb L: 31/35 MS: 1 CopyPart- 00:08:26.491 [2024-07-25 15:56:44.449808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a65000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.491 [2024-07-25 15:56:44.449831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.491 #61 NEW cov: 12234 ft: 15245 corp: 29/605b lim: 35 exec/s: 61 rss: 73Mb L: 7/35 MS: 1 CopyPart- 00:08:26.751 [2024-07-25 15:56:44.499990] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.751 [2024-07-25 15:56:44.500490] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.751 [2024-07-25 15:56:44.500749] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.751 [2024-07-25 15:56:44.501227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.751 [2024-07-25 15:56:44.501255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.751 [2024-07-25 15:56:44.501351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000010 cdw11:00000065 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.751 [2024-07-25 15:56:44.501365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.751 [2024-07-25 15:56:44.501455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.751 [2024-07-25 15:56:44.501471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.751 [2024-07-25 15:56:44.501563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ff0000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.751 [2024-07-25 15:56:44.501578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.751 #62 NEW cov: 12234 ft: 15252 corp: 30/634b lim: 35 exec/s: 62 rss: 73Mb L: 29/35 MS: 1 CrossOver- 00:08:26.751 [2024-07-25 15:56:44.551445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aaf000a cdw11:af00afaf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.751 [2024-07-25 15:56:44.551469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.751 [2024-07-25 15:56:44.551564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:afaf00af cdw11:af00afaf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.751 [2024-07-25 15:56:44.551577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.751 [2024-07-25 15:56:44.551665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:af4100af cdw11:af00afaf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.751 [2024-07-25 15:56:44.551678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.751 [2024-07-25 15:56:44.551772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:afaf00af cdw11:af00afaf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.751 [2024-07-25 15:56:44.551787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.751 #63 NEW cov: 12234 ft: 15255 corp: 31/666b lim: 35 exec/s: 63 rss: 73Mb L: 32/35 MS: 1 InsertByte- 00:08:26.751 [2024-07-25 15:56:44.610903] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.751 [2024-07-25 15:56:44.611192] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.751 [2024-07-25 15:56:44.611464] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.751 [2024-07-25 15:56:44.611942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.751 [2024-07-25 15:56:44.611968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.751 [2024-07-25 15:56:44.612068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.751 [2024-07-25 15:56:44.612085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.751 [2024-07-25 15:56:44.612166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.751 [2024-07-25 15:56:44.612180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.751 [2024-07-25 15:56:44.612277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.751 [2024-07-25 15:56:44.612291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.751 #64 NEW cov: 12234 ft: 15263 corp: 32/697b lim: 35 exec/s: 64 rss: 73Mb L: 31/35 MS: 1 CopyPart- 00:08:26.751 [2024-07-25 15:56:44.661162] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.751 [2024-07-25 15:56:44.661437] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.751 [2024-07-25 15:56:44.661698] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:26.751 [2024-07-25 15:56:44.662185] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.751 [2024-07-25 15:56:44.662211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.751 [2024-07-25 15:56:44.662292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:65000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.752 [2024-07-25 15:56:44.662306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.752 [2024-07-25 15:56:44.662395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.752 [2024-07-25 15:56:44.662410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.752 [2024-07-25 15:56:44.662500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:81000081 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.752 [2024-07-25 15:56:44.662516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.752 #65 NEW cov: 12234 ft: 15265 corp: 33/731b lim: 35 exec/s: 65 rss: 73Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:08:26.752 [2024-07-25 15:56:44.722443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.752 [2024-07-25 15:56:44.722469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.752 [2024-07-25 15:56:44.722568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.752 [2024-07-25 15:56:44.722582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.752 [2024-07-25 15:56:44.722682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.752 [2024-07-25 15:56:44.722695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.752 [2024-07-25 15:56:44.722793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:0a00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.752 [2024-07-25 15:56:44.722807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:27.022 #66 NEW cov: 12234 ft: 15266 corp: 34/764b lim: 35 exec/s: 33 rss: 73Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:08:27.022 #66 DONE cov: 12234 ft: 15266 corp: 34/764b lim: 35 exec/s: 33 rss: 73Mb 00:08:27.022 ###### Recommended dictionary. ###### 00:08:27.022 "e\000\000\000" # Uses: 5 00:08:27.022 "\000\020" # Uses: 3 00:08:27.022 "\377\015" # Uses: 0 00:08:27.022 ###### End of recommended dictionary. 
###### 00:08:27.022 Done 66 runs in 2 second(s) 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:27.022 15:56:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:08:27.022 [2024-07-25 15:56:44.922115] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:08:27.022 [2024-07-25 15:56:44.922188] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166176 ] 00:08:27.022 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.284 [2024-07-25 15:56:45.101446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.284 [2024-07-25 15:56:45.167072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.284 [2024-07-25 15:56:45.225766] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.284 [2024-07-25 15:56:45.241999] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:08:27.284 INFO: Running with entropic power schedule (0xFF, 100). 00:08:27.284 INFO: Seed: 1097567877 00:08:27.543 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:27.543 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:27.543 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:27.543 INFO: A corpus is not provided, starting from an empty corpus 00:08:27.543 #2 INITED exec/s: 0 rss: 63Mb 00:08:27.543 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:27.543 This may also happen if the target rejected all inputs we tried so far 00:08:27.543 [2024-07-25 15:56:45.291712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:27.543 [2024-07-25 15:56:45.291743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.543 NEW_FUNC[1/706]: 0x488f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:08:27.543 NEW_FUNC[2/706]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:27.543 #3 NEW cov: 12151 ft: 12146 corp: 2/10b lim: 20 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 CMP- DE: "\015\000\000\000\000\000\000\000"- 00:08:27.543 #4 NEW cov: 12282 ft: 12853 corp: 3/27b lim: 20 exec/s: 0 rss: 71Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:08:27.543 [2024-07-25 15:56:45.491874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:27.543 [2024-07-25 15:56:45.491908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.543 NEW_FUNC[1/3]: 0x134b840 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:784 00:08:27.543 NEW_FUNC[2/3]: 0x136e920 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3577 00:08:27.543 #6 NEW cov: 12367 ft: 13311 corp: 4/37b lim: 20 exec/s: 0 rss: 71Mb L: 10/17 MS: 2 CrossOver-CMP- DE: "\000\000\000\000\000\000\000\000"- 00:08:27.802 #7 NEW cov: 12455 ft: 13661 corp: 5/46b lim: 20 exec/s: 0 rss: 72Mb L: 9/17 MS: 1 ChangeByte- 00:08:27.802 [2024-07-25 15:56:45.592147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:27.802 [2024-07-25 
15:56:45.592173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.802 #8 NEW cov: 12455 ft: 13730 corp: 6/56b lim: 20 exec/s: 0 rss: 72Mb L: 10/17 MS: 1 ChangeBinInt- 00:08:27.802 #11 NEW cov: 12455 ft: 14021 corp: 7/60b lim: 20 exec/s: 0 rss: 72Mb L: 4/17 MS: 3 ChangeByte-InsertByte-CopyPart- 00:08:27.802 #12 NEW cov: 12455 ft: 14098 corp: 8/70b lim: 20 exec/s: 0 rss: 72Mb L: 10/17 MS: 1 InsertRepeatedBytes- 00:08:27.802 [2024-07-25 15:56:45.732569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:27.802 [2024-07-25 15:56:45.732592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.802 #13 NEW cov: 12455 ft: 14221 corp: 9/80b lim: 20 exec/s: 0 rss: 72Mb L: 10/17 MS: 1 ChangeByte- 00:08:27.802 [2024-07-25 15:56:45.782964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:27.802 [2024-07-25 15:56:45.782987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:28.062 #14 NEW cov: 12455 ft: 14374 corp: 10/97b lim: 20 exec/s: 0 rss: 72Mb L: 17/17 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:08:28.062 [2024-07-25 15:56:45.832918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:28.062 [2024-07-25 15:56:45.832941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.062 #15 NEW cov: 12459 ft: 14558 corp: 11/110b lim: 20 exec/s: 0 rss: 72Mb L: 13/17 MS: 1 CopyPart- 00:08:28.062 [2024-07-25 15:56:45.883238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:28.062 [2024-07-25 15:56:45.883267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.062 #16 NEW cov: 12459 ft: 14687 corp: 12/130b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 CrossOver- 00:08:28.062 #17 NEW cov: 12459 ft: 14742 corp: 13/136b lim: 20 exec/s: 0 rss: 72Mb L: 6/20 MS: 1 CopyPart- 00:08:28.062 [2024-07-25 15:56:45.973534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:28.062 [2024-07-25 15:56:45.973556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.062 #18 NEW cov: 12459 ft: 14806 corp: 14/156b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 ChangeByte- 00:08:28.321 #19 NEW cov: 12459 ft: 14855 corp: 15/162b lim: 20 exec/s: 0 rss: 72Mb L: 6/20 MS: 1 EraseBytes- 00:08:28.321 #23 NEW cov: 12459 ft: 14944 corp: 16/168b lim: 20 exec/s: 0 rss: 72Mb L: 6/20 MS: 4 InsertByte-InsertByte-ChangeBit-InsertRepeatedBytes- 00:08:28.321 [2024-07-25 15:56:46.113907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:28.321 [2024-07-25 15:56:46.113931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:28.321 #24 NEW cov: 
12459 ft: 14969 corp: 17/186b lim: 20 exec/s: 0 rss: 72Mb L: 18/20 MS: 1 InsertByte- 00:08:28.321 [2024-07-25 15:56:46.154013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:28.321 [2024-07-25 15:56:46.154037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.321 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:28.321 #25 NEW cov: 12482 ft: 14983 corp: 18/206b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 ChangeBit- 00:08:28.321 [2024-07-25 15:56:46.193698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:28.321 [2024-07-25 15:56:46.193721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.321 #28 NEW cov: 12482 ft: 15019 corp: 19/217b lim: 20 exec/s: 0 rss: 72Mb L: 11/20 MS: 3 EraseBytes-ShuffleBytes-PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:28.321 #29 NEW cov: 12482 ft: 15027 corp: 20/226b lim: 20 exec/s: 0 rss: 72Mb L: 9/20 MS: 1 ChangeByte- 00:08:28.580 #30 NEW cov: 12482 ft: 15034 corp: 21/243b lim: 20 exec/s: 30 rss: 72Mb L: 17/20 MS: 1 CrossOver- 00:08:28.580 #31 NEW cov: 12482 ft: 15042 corp: 22/261b lim: 20 exec/s: 31 rss: 72Mb L: 18/20 MS: 1 InsertRepeatedBytes- 00:08:28.580 #32 NEW cov: 12482 ft: 15085 corp: 23/279b lim: 20 exec/s: 32 rss: 72Mb L: 18/20 MS: 1 CrossOver- 00:08:28.580 #33 NEW cov: 12482 ft: 15104 corp: 24/296b lim: 20 exec/s: 33 rss: 72Mb L: 17/20 MS: 1 ChangeBinInt- 00:08:28.580 [2024-07-25 15:56:46.464566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:28.580 [2024-07-25 15:56:46.464589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.580 #34 NEW cov: 12482 ft: 15137 corp: 25/306b lim: 20 exec/s: 34 rss: 72Mb L: 10/20 MS: 1 ShuffleBytes- 00:08:28.580 #35 NEW cov: 12482 ft: 15147 corp: 26/324b lim: 20 exec/s: 35 rss: 72Mb L: 18/20 MS: 1 ChangeByte- 00:08:28.580 [2024-07-25 15:56:46.554924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:28.580 [2024-07-25 15:56:46.554948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.840 #36 NEW cov: 12482 ft: 15156 corp: 27/338b lim: 20 exec/s: 36 rss: 72Mb L: 14/20 MS: 1 EraseBytes- 00:08:28.840 [2024-07-25 15:56:46.594937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:28.840 [2024-07-25 15:56:46.594963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.840 #37 NEW cov: 12482 ft: 15210 corp: 28/347b lim: 20 exec/s: 37 rss: 72Mb L: 9/20 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:28.840 #38 NEW cov: 12482 ft: 15233 corp: 29/360b lim: 20 exec/s: 38 rss: 72Mb L: 13/20 MS: 1 InsertRepeatedBytes- 00:08:28.840 [2024-07-25 15:56:46.675101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:08:28.840 [2024-07-25 15:56:46.675125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.840 #39 NEW cov: 12482 ft: 15244 corp: 30/368b lim: 20 exec/s: 39 rss: 73Mb L: 8/20 MS: 1 CopyPart- 00:08:28.840 #40 NEW cov: 12482 ft: 15267 corp: 31/386b lim: 20 exec/s: 40 rss: 73Mb L: 18/20 MS: 1 CopyPart- 00:08:28.840 [2024-07-25 15:56:46.775739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:28.840 [2024-07-25 15:56:46.775767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:28.840 #41 NEW cov: 12482 ft: 15318 corp: 32/404b lim: 20 exec/s: 41 rss: 73Mb L: 18/20 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:29.098 #42 NEW cov: 12482 ft: 15324 corp: 33/417b lim: 20 exec/s: 42 rss: 73Mb L: 13/20 MS: 1 ChangeBit- 00:08:29.098 [2024-07-25 15:56:46.875871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.098 [2024-07-25 15:56:46.875895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.098 #43 NEW cov: 12482 ft: 15336 corp: 34/435b lim: 20 exec/s: 43 rss: 73Mb L: 18/20 MS: 1 ChangeBinInt- 00:08:29.098 #44 NEW cov: 12482 ft: 15342 corp: 35/445b lim: 20 exec/s: 44 rss: 73Mb L: 10/20 MS: 1 ChangeBinInt- 00:08:29.099 [2024-07-25 15:56:46.966118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.099 [2024-07-25 15:56:46.966142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.099 #45 NEW cov: 12482 ft: 15358 corp: 36/459b lim: 20 exec/s: 45 rss: 73Mb L: 14/20 MS: 1 InsertRepeatedBytes- 00:08:29.099 #46 NEW cov: 12482 ft: 15373 corp: 37/470b lim: 20 exec/s: 46 rss: 73Mb L: 11/20 MS: 1 InsertByte- 00:08:29.099 #47 NEW cov: 12482 ft: 15382 corp: 38/488b lim: 20 exec/s: 47 rss: 73Mb L: 18/20 MS: 1 InsertByte- 00:08:29.358 [2024-07-25 15:56:47.096754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.358 [2024-07-25 15:56:47.096785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.358 #48 NEW cov: 12482 ft: 15423 corp: 39/508b lim: 20 exec/s: 48 rss: 73Mb L: 20/20 MS: 1 ShuffleBytes- 00:08:29.358 [2024-07-25 15:56:47.156801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.358 [2024-07-25 15:56:47.156825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.358 #49 NEW cov: 12482 ft: 15466 corp: 40/528b lim: 20 exec/s: 49 rss: 74Mb L: 20/20 MS: 1 ChangeByte- 00:08:29.358 #50 NEW cov: 12482 ft: 15475 corp: 41/546b lim: 20 exec/s: 50 rss: 74Mb L: 18/20 MS: 1 ChangeBit- 00:08:29.358 [2024-07-25 15:56:47.246773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.358 [2024-07-25 15:56:47.246797] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.358 #51 NEW cov: 12482 ft: 15539 corp: 42/555b lim: 20 exec/s: 51 rss: 74Mb L: 9/20 MS: 1 EraseBytes- 00:08:29.358 [2024-07-25 15:56:47.286801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.358 [2024-07-25 15:56:47.286825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.358 #52 NEW cov: 12482 ft: 15558 corp: 43/565b lim: 20 exec/s: 26 rss: 74Mb L: 10/20 MS: 1 CopyPart- 00:08:29.358 #52 DONE cov: 12482 ft: 15558 corp: 43/565b lim: 20 exec/s: 26 rss: 74Mb 00:08:29.359 ###### Recommended dictionary. ###### 00:08:29.359 "\015\000\000\000\000\000\000\000" # Uses: 0 00:08:29.359 "\000\000\000\000\000\000\000\000" # Uses: 3 00:08:29.359 ###### End of recommended dictionary. ###### 00:08:29.359 Done 52 runs in 2 second(s) 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:29.618 15:56:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:08:29.618 [2024-07-25 15:56:47.476638] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:08:29.618 [2024-07-25 15:56:47.476700] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166614 ] 00:08:29.618 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.878 [2024-07-25 15:56:47.649023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.878 [2024-07-25 15:56:47.713597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.878 [2024-07-25 15:56:47.772112] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.878 [2024-07-25 15:56:47.788355] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:08:29.878 INFO: Running with entropic power schedule (0xFF, 100). 00:08:29.878 INFO: Seed: 3645566797 00:08:29.878 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:29.878 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:29.878 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:29.878 INFO: A corpus is not provided, starting from an empty corpus 00:08:29.878 #2 INITED exec/s: 0 rss: 63Mb 00:08:29.878 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:29.878 This may also happen if the target rejected all inputs we tried so far 00:08:29.878 [2024-07-25 15:56:47.843953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:29.878 [2024-07-25 15:56:47.843984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.878 [2024-07-25 15:56:47.844046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:29.878 [2024-07-25 15:56:47.844060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.137 NEW_FUNC[1/699]: 0x489ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:08:30.137 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:30.137 #19 NEW cov: 12011 ft: 12010 corp: 2/17b lim: 35 exec/s: 0 rss: 71Mb L: 16/16 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:30.137 [2024-07-25 15:56:47.994579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.137 [2024-07-25 15:56:47.994616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.137 [2024-07-25 15:56:47.994672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.137 [2024-07-25 15:56:47.994685] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.137 [2024-07-25 15:56:47.994739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.137 [2024-07-25 15:56:47.994752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.137 [2024-07-25 15:56:47.994811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.137 [2024-07-25 15:56:47.994823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.137 NEW_FUNC[1/2]: 0xffb5a0 in posix_sock_readv /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1546 00:08:30.137 NEW_FUNC[2/2]: 0x16092d0 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1528 00:08:30.137 #20 NEW cov: 12132 ft: 12795 corp: 3/49b lim: 35 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 CrossOver- 00:08:30.137 [2024-07-25 15:56:48.054618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.137 [2024-07-25 15:56:48.054642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.137 [2024-07-25 15:56:48.054693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.137 [2024-07-25 15:56:48.054704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.137 [2024-07-25 15:56:48.054752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.137 [2024-07-25 15:56:48.054770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.137 [2024-07-25 15:56:48.054817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.137 [2024-07-25 15:56:48.054827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.137 #21 NEW cov: 12138 ft: 13006 corp: 4/82b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 InsertByte- 00:08:30.137 [2024-07-25 15:56:48.104791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.137 [2024-07-25 15:56:48.104814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.137 [2024-07-25 15:56:48.104865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000008b cdw11:f1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.137 [2024-07-25 15:56:48.104875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.137 [2024-07-25 15:56:48.104924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.137 [2024-07-25 15:56:48.104935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.137 [2024-07-25 15:56:48.104985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.137 [2024-07-25 15:56:48.104995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.396 #22 NEW cov: 12223 ft: 13309 corp: 5/115b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 ChangeByte- 00:08:30.396 [2024-07-25 15:56:48.154581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.396 [2024-07-25 15:56:48.154605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.396 [2024-07-25 15:56:48.154654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:58000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.396 [2024-07-25 15:56:48.154665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.396 #23 NEW cov: 12223 ft: 13483 corp: 6/132b lim: 35 exec/s: 0 rss: 72Mb L: 17/33 MS: 1 InsertByte- 00:08:30.396 [2024-07-25 15:56:48.194679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f140 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.396 [2024-07-25 15:56:48.194701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.396 [2024-07-25 15:56:48.194752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.396 [2024-07-25 15:56:48.194767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.396 #24 NEW cov: 12223 ft: 13598 corp: 7/148b lim: 35 exec/s: 0 rss: 72Mb L: 16/33 MS: 1 ChangeBit- 00:08:30.396 [2024-07-25 15:56:48.234852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0001f140 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.396 [2024-07-25 15:56:48.234874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.396 [2024-07-25 15:56:48.234941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.396 [2024-07-25 15:56:48.234955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.396 #25 NEW cov: 12223 ft: 13702 corp: 8/164b lim: 35 exec/s: 0 rss: 72Mb L: 16/33 MS: 1 ChangeBinInt- 00:08:30.396 [2024-07-25 15:56:48.284797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) 
qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.396 [2024-07-25 15:56:48.284818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.396 #26 NEW cov: 12223 ft: 14464 corp: 9/177b lim: 35 exec/s: 0 rss: 72Mb L: 13/33 MS: 1 EraseBytes- 00:08:30.396 [2024-07-25 15:56:48.325425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.396 [2024-07-25 15:56:48.325446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.396 [2024-07-25 15:56:48.325512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000077 cdw11:f1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.396 [2024-07-25 15:56:48.325522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.396 [2024-07-25 15:56:48.325571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.396 [2024-07-25 15:56:48.325581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.396 [2024-07-25 15:56:48.325631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.396 [2024-07-25 15:56:48.325641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.396 #27 NEW cov: 12223 ft: 14528 corp: 10/210b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 ChangeByte- 00:08:30.397 [2024-07-25 15:56:48.375528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.397 [2024-07-25 15:56:48.375551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.397 [2024-07-25 15:56:48.375602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000008b cdw11:f1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.397 [2024-07-25 15:56:48.375613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.397 [2024-07-25 15:56:48.375664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.397 [2024-07-25 15:56:48.375674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.397 [2024-07-25 15:56:48.375724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.397 [2024-07-25 15:56:48.375734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.656 #28 NEW cov: 12223 ft: 14573 corp: 11/243b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 ChangeByte- 00:08:30.656 
[2024-07-25 15:56:48.415475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.415496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.656 [2024-07-25 15:56:48.415548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.415558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.656 [2024-07-25 15:56:48.415608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.415618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.656 #29 NEW cov: 12223 ft: 14788 corp: 12/267b lim: 35 exec/s: 0 rss: 72Mb L: 24/33 MS: 1 EraseBytes- 00:08:30.656 [2024-07-25 15:56:48.455752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.455778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.656 [2024-07-25 15:56:48.455830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:16000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.455840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.656 [2024-07-25 15:56:48.455888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.455898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.656 [2024-07-25 15:56:48.455946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.455956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.656 #30 NEW cov: 12223 ft: 14803 corp: 13/300b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 ChangeBinInt- 00:08:30.656 [2024-07-25 15:56:48.495893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.495914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.656 [2024-07-25 15:56:48.495981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:77000000 cdw11:00f10000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.495992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.656 
[2024-07-25 15:56:48.496042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00003b00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.496052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.656 [2024-07-25 15:56:48.496101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.496111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.656 #31 NEW cov: 12223 ft: 14826 corp: 14/334b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 CrossOver- 00:08:30.656 [2024-07-25 15:56:48.546043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.546065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.656 [2024-07-25 15:56:48.546119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.546130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.656 [2024-07-25 15:56:48.546178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.546189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.656 [2024-07-25 15:56:48.546237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.546247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.656 #32 NEW cov: 12223 ft: 14843 corp: 15/363b lim: 35 exec/s: 0 rss: 72Mb L: 29/34 MS: 1 EraseBytes- 00:08:30.656 [2024-07-25 15:56:48.586183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.586205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.656 [2024-07-25 15:56:48.586271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.586282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.656 [2024-07-25 15:56:48.586331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:30000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.586341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.656 
[2024-07-25 15:56:48.586388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.586398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.656 #33 NEW cov: 12223 ft: 14874 corp: 16/396b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 ChangeByte- 00:08:30.656 [2024-07-25 15:56:48.626283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.626305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.656 [2024-07-25 15:56:48.626356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:16000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.626366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.656 [2024-07-25 15:56:48.626417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.626428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.656 [2024-07-25 15:56:48.626475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.656 [2024-07-25 15:56:48.626485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.915 #34 NEW cov: 12223 ft: 14899 corp: 17/429b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 CopyPart- 00:08:30.915 [2024-07-25 15:56:48.676421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.915 [2024-07-25 15:56:48.676443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.915 [2024-07-25 15:56:48.676509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:16000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.915 [2024-07-25 15:56:48.676520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.915 [2024-07-25 15:56:48.676568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.915 [2024-07-25 15:56:48.676578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.915 [2024-07-25 15:56:48.676627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.915 [2024-07-25 15:56:48.676637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.915 #35 
NEW cov: 12223 ft: 14945 corp: 18/462b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 ShuffleBytes- 00:08:30.915 [2024-07-25 15:56:48.716204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0001f140 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.915 [2024-07-25 15:56:48.716226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.915 [2024-07-25 15:56:48.716294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.915 [2024-07-25 15:56:48.716304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.915 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:30.915 #36 NEW cov: 12246 ft: 14991 corp: 19/479b lim: 35 exec/s: 0 rss: 72Mb L: 17/34 MS: 1 InsertByte- 00:08:30.915 [2024-07-25 15:56:48.766177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0001f140 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.915 [2024-07-25 15:56:48.766199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.915 #37 NEW cov: 12246 ft: 15018 corp: 20/492b lim: 35 exec/s: 0 rss: 72Mb L: 13/34 MS: 1 EraseBytes- 00:08:30.915 [2024-07-25 15:56:48.806804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.915 [2024-07-25 15:56:48.806825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.915 [2024-07-25 15:56:48.806892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.915 [2024-07-25 15:56:48.806903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.916 [2024-07-25 15:56:48.806961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.916 [2024-07-25 15:56:48.806972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.916 [2024-07-25 15:56:48.807020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.916 [2024-07-25 15:56:48.807033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.916 #38 NEW cov: 12246 ft: 15041 corp: 21/524b lim: 35 exec/s: 0 rss: 72Mb L: 32/34 MS: 1 ShuffleBytes- 00:08:30.916 [2024-07-25 15:56:48.846724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f140 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.916 [2024-07-25 15:56:48.846745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.916 [2024-07-25 
15:56:48.846814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00480002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.916 [2024-07-25 15:56:48.846826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.916 [2024-07-25 15:56:48.846875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:48484848 cdw11:48480002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.916 [2024-07-25 15:56:48.846886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.916 #39 NEW cov: 12246 ft: 15055 corp: 22/551b lim: 35 exec/s: 39 rss: 72Mb L: 27/34 MS: 1 InsertRepeatedBytes- 00:08:30.916 [2024-07-25 15:56:48.886669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.916 [2024-07-25 15:56:48.886689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.916 [2024-07-25 15:56:48.886755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:58000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:30.916 [2024-07-25 15:56:48.886771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.174 #40 NEW cov: 12246 ft: 15086 corp: 23/568b lim: 35 exec/s: 40 rss: 72Mb L: 17/34 MS: 1 CopyPart- 00:08:31.174 [2024-07-25 15:56:48.936825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0001f140 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.174 [2024-07-25 15:56:48.936846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.174 [2024-07-25 15:56:48.936914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000d503 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.174 [2024-07-25 15:56:48.936925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.174 #41 NEW cov: 12246 ft: 15127 corp: 24/585b lim: 35 exec/s: 41 rss: 72Mb L: 17/34 MS: 1 CMP- DE: "\325\003\000\000\000\000\000\000"- 00:08:31.174 [2024-07-25 15:56:48.986963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f132 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.174 [2024-07-25 15:56:48.986984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.174 [2024-07-25 15:56:48.987050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:58000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.174 [2024-07-25 15:56:48.987061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.174 #42 NEW cov: 12246 ft: 15138 corp: 25/602b lim: 35 exec/s: 42 rss: 72Mb L: 17/34 MS: 1 ChangeByte- 00:08:31.174 [2024-07-25 15:56:49.027248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 
nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.174 [2024-07-25 15:56:49.027269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.175 [2024-07-25 15:56:49.027337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.175 [2024-07-25 15:56:49.027349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.175 [2024-07-25 15:56:49.027398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.175 [2024-07-25 15:56:49.027409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.175 #43 NEW cov: 12246 ft: 15145 corp: 26/627b lim: 35 exec/s: 43 rss: 72Mb L: 25/34 MS: 1 EraseBytes- 00:08:31.175 [2024-07-25 15:56:49.077198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f1ff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.175 [2024-07-25 15:56:49.077220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.175 [2024-07-25 15:56:49.077270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00580000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.175 [2024-07-25 15:56:49.077280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.175 #44 NEW cov: 12246 ft: 15161 corp: 27/645b lim: 35 exec/s: 44 rss: 73Mb L: 18/34 MS: 1 InsertByte- 00:08:31.175 [2024-07-25 15:56:49.127379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.175 [2024-07-25 15:56:49.127402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.175 [2024-07-25 15:56:49.127453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:58000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.175 [2024-07-25 15:56:49.127464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.175 #45 NEW cov: 12246 ft: 15173 corp: 28/662b lim: 35 exec/s: 45 rss: 73Mb L: 17/34 MS: 1 ChangeBinInt- 00:08:31.433 [2024-07-25 15:56:49.167770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.433 [2024-07-25 15:56:49.167792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.433 [2024-07-25 15:56:49.167844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.433 [2024-07-25 15:56:49.167853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.433 
[2024-07-25 15:56:49.167906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.433 [2024-07-25 15:56:49.167918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.433 [2024-07-25 15:56:49.167965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.434 [2024-07-25 15:56:49.167975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:31.434 #46 NEW cov: 12246 ft: 15184 corp: 29/695b lim: 35 exec/s: 46 rss: 73Mb L: 33/34 MS: 1 ShuffleBytes- 00:08:31.434 [2024-07-25 15:56:49.207586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.434 [2024-07-25 15:56:49.207607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.434 [2024-07-25 15:56:49.207677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f1000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.434 [2024-07-25 15:56:49.207688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.434 #47 NEW cov: 12246 ft: 15187 corp: 30/709b lim: 35 exec/s: 47 rss: 73Mb L: 14/34 MS: 1 CrossOver- 00:08:31.434 [2024-07-25 15:56:49.258022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f140 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.434 [2024-07-25 15:56:49.258043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.434 [2024-07-25 15:56:49.258093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00480002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.434 [2024-07-25 15:56:49.258104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.434 [2024-07-25 15:56:49.258154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ff48ffff cdw11:48480002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.434 [2024-07-25 15:56:49.258165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.434 [2024-07-25 15:56:49.258213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:48484848 cdw11:48000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.434 [2024-07-25 15:56:49.258223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:31.434 #48 NEW cov: 12246 ft: 15188 corp: 31/739b lim: 35 exec/s: 48 rss: 73Mb L: 30/34 MS: 1 InsertRepeatedBytes- 00:08:31.434 [2024-07-25 15:56:49.308015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0001f140 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.434 [2024-07-25 15:56:49.308038] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.434 [2024-07-25 15:56:49.308086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.434 [2024-07-25 15:56:49.308097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.434 [2024-07-25 15:56:49.308148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.434 [2024-07-25 15:56:49.308159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.434 #49 NEW cov: 12246 ft: 15222 corp: 32/764b lim: 35 exec/s: 49 rss: 73Mb L: 25/34 MS: 1 CopyPart- 00:08:31.434 [2024-07-25 15:56:49.347996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0001f140 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.434 [2024-07-25 15:56:49.348019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.434 [2024-07-25 15:56:49.348072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000010 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.434 [2024-07-25 15:56:49.348083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.434 #50 NEW cov: 12246 ft: 15303 corp: 33/780b lim: 35 exec/s: 50 rss: 73Mb L: 16/34 MS: 1 ChangeBinInt- 00:08:31.434 [2024-07-25 15:56:49.388123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.434 [2024-07-25 15:56:49.388148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.434 [2024-07-25 15:56:49.388215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00bd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.434 [2024-07-25 15:56:49.388225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.434 #51 NEW cov: 12246 ft: 15340 corp: 34/797b lim: 35 exec/s: 51 rss: 73Mb L: 17/34 MS: 1 InsertByte- 00:08:31.693 [2024-07-25 15:56:49.428234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f1fb cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.428256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.693 [2024-07-25 15:56:49.428305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.428317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.693 #52 NEW cov: 12246 ft: 15368 corp: 35/811b lim: 35 exec/s: 52 rss: 73Mb L: 14/34 MS: 1 InsertByte- 00:08:31.693 [2024-07-25 
15:56:49.478529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.478551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.693 [2024-07-25 15:56:49.478601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:16000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.478612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.693 [2024-07-25 15:56:49.478662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.478672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.693 #53 NEW cov: 12246 ft: 15376 corp: 36/832b lim: 35 exec/s: 53 rss: 73Mb L: 21/34 MS: 1 EraseBytes- 00:08:31.693 [2024-07-25 15:56:49.528664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.528687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.693 [2024-07-25 15:56:49.528735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f1000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.528745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.693 [2024-07-25 15:56:49.528812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.528824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.693 #54 NEW cov: 12246 ft: 15380 corp: 37/857b lim: 35 exec/s: 54 rss: 73Mb L: 25/34 MS: 1 ChangeByte- 00:08:31.693 [2024-07-25 15:56:49.578789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0001f140 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.578811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.693 [2024-07-25 15:56:49.578880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:80800001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.578891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.693 [2024-07-25 15:56:49.578938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.578948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.693 #55 NEW cov: 
12246 ft: 15398 corp: 38/880b lim: 35 exec/s: 55 rss: 73Mb L: 23/34 MS: 1 InsertRepeatedBytes- 00:08:31.693 [2024-07-25 15:56:49.618766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fffff0ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.618788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.693 [2024-07-25 15:56:49.618837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f100f800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.618847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.693 #56 NEW cov: 12246 ft: 15403 corp: 39/894b lim: 35 exec/s: 56 rss: 73Mb L: 14/34 MS: 1 ChangeBinInt- 00:08:31.693 [2024-07-25 15:56:49.669197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:000000f1 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.669219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.693 [2024-07-25 15:56:49.669284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00f10000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.669295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.693 [2024-07-25 15:56:49.669344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00003b00 cdw11:00300000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.669354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.693 [2024-07-25 15:56:49.669402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.693 [2024-07-25 15:56:49.669412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:31.952 #57 NEW cov: 12246 ft: 15415 corp: 40/928b lim: 35 exec/s: 57 rss: 74Mb L: 34/34 MS: 1 CrossOver- 00:08:31.953 [2024-07-25 15:56:49.719044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0300f0d5 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.953 [2024-07-25 15:56:49.719066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.953 [2024-07-25 15:56:49.719116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f1000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.953 [2024-07-25 15:56:49.719127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.953 #58 NEW cov: 12246 ft: 15425 corp: 41/942b lim: 35 exec/s: 58 rss: 74Mb L: 14/34 MS: 1 PersAutoDict- DE: "\325\003\000\000\000\000\000\000"- 00:08:31.953 [2024-07-25 15:56:49.769448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) 
qid:0 cid:4 nsid:0 cdw10:0000f140 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.953 [2024-07-25 15:56:49.769474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.953 [2024-07-25 15:56:49.769523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:08480002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.953 [2024-07-25 15:56:49.769534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.953 [2024-07-25 15:56:49.769583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ff48ffff cdw11:48480002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.953 [2024-07-25 15:56:49.769593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.953 [2024-07-25 15:56:49.769639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:48484848 cdw11:48000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.953 [2024-07-25 15:56:49.769649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:31.953 #59 NEW cov: 12246 ft: 15432 corp: 42/972b lim: 35 exec/s: 59 rss: 74Mb L: 30/34 MS: 1 ChangeBit- 00:08:31.953 [2024-07-25 15:56:49.819460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000f100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.953 [2024-07-25 15:56:49.819482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.953 [2024-07-25 15:56:49.819547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.953 [2024-07-25 15:56:49.819558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.953 [2024-07-25 15:56:49.819605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:31.953 [2024-07-25 15:56:49.819615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.953 #60 NEW cov: 12246 ft: 15464 corp: 43/993b lim: 35 exec/s: 30 rss: 74Mb L: 21/34 MS: 1 CrossOver- 00:08:31.953 #60 DONE cov: 12246 ft: 15464 corp: 43/993b lim: 35 exec/s: 30 rss: 74Mb 00:08:31.953 ###### Recommended dictionary. ###### 00:08:31.953 "\325\003\000\000\000\000\000\000" # Uses: 1 00:08:31.953 ###### End of recommended dictionary. 
###### 00:08:31.953 Done 60 runs in 2 second(s) 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:32.212 15:56:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:08:32.212 [2024-07-25 15:56:50.011536] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
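The "#N NEW cov: ... ft: ... corp: ..." status lines emitted throughout the run that just completed above (for example "#29 NEW cov: 12223 ft: 14788 corp: 12/267b ... exec/s: 0 rss: 72Mb") follow the usual libFuzzer convention: cov and ft are the coverage and feature counters, corp is the corpus size in entries/bytes, and exec/s and rss report throughput and resident memory. The snippet below is a minimal sketch, not part of the captured output, for scraping those fields out of a saved console log so coverage growth between CI runs can be compared; the regular expression and the field interpretation are assumptions based on the lines visible above.

    #!/usr/bin/env python3
    # Minimal sketch: pull libFuzzer-style status lines ("#N NEW cov: ... ft: ...")
    # out of a captured console log and print one line per coverage increase.
    import re
    import sys

    STATUS = re.compile(
        r"#(?P<iter>\d+)\s+(?:NEW|REDUCE|pulse|INITED|DONE)?\s*"
        r"cov:\s*(?P<cov>\d+)\s+ft:\s*(?P<ft>\d+)\s+corp:\s*(?P<corp>\d+)/(?P<size>\S+)"
        r".*?exec/s:\s*(?P<execs>\d+)\s+rss:\s*(?P<rss>\d+)Mb"
    )

    def report(path):
        last_cov = 0
        with open(path, errors="replace") as fh:
            for line in fh:
                m = STATUS.search(line)
                if not m:
                    continue
                cov = int(m.group("cov"))
                if cov > last_cov:  # only print when coverage actually grows
                    print("iter %-6s cov %-6s ft %-6s corpus %s/%s rss %sMb" % (
                        m.group("iter"), cov, m.group("ft"),
                        m.group("corp"), m.group("size"), m.group("rss")))
                    last_cov = cov

    if __name__ == "__main__":
        report(sys.argv[1] if len(sys.argv) > 1 else "/dev/stdin")

Fed the portion of this log between an "INFO: Seed:" banner and the matching "DONE" summary, it would print one line for each coverage jump recorded above, which makes a run whose corpus stops growing easy to spot.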
00:08:32.212 [2024-07-25 15:56:50.011612] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167041 ] 00:08:32.212 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.471 [2024-07-25 15:56:50.203606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.471 [2024-07-25 15:56:50.270507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.471 [2024-07-25 15:56:50.329555] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.471 [2024-07-25 15:56:50.345811] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:08:32.471 INFO: Running with entropic power schedule (0xFF, 100). 00:08:32.471 INFO: Seed: 1906610189 00:08:32.471 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:32.471 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:32.471 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:32.471 INFO: A corpus is not provided, starting from an empty corpus 00:08:32.471 #2 INITED exec/s: 0 rss: 63Mb 00:08:32.471 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:32.471 This may also happen if the target rejected all inputs we tried so far 00:08:32.471 [2024-07-25 15:56:50.394004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:bdbd0a10 cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.471 [2024-07-25 15:56:50.394048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.471 [2024-07-25 15:56:50.394123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.471 [2024-07-25 15:56:50.394145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.471 [2024-07-25 15:56:50.394218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.471 [2024-07-25 15:56:50.394237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.730 NEW_FUNC[1/701]: 0x48c180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:08:32.730 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:32.730 #10 NEW cov: 12030 ft: 12029 corp: 2/36b lim: 45 exec/s: 0 rss: 70Mb L: 35/35 MS: 3 CopyPart-InsertByte-InsertRepeatedBytes- 00:08:32.730 [2024-07-25 15:56:50.554279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.730 [2024-07-25 15:56:50.554327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.730 [2024-07-25 15:56:50.554408] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.730 [2024-07-25 15:56:50.554429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.730 [2024-07-25 15:56:50.554499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.730 [2024-07-25 15:56:50.554518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.730 #11 NEW cov: 12143 ft: 12629 corp: 3/71b lim: 45 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:08:32.730 [2024-07-25 15:56:50.594013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:bdbd0a10 cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.730 [2024-07-25 15:56:50.594037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.730 [2024-07-25 15:56:50.594090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.730 [2024-07-25 15:56:50.594102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.730 [2024-07-25 15:56:50.594152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.730 [2024-07-25 15:56:50.594162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.730 #12 NEW cov: 12149 ft: 12900 corp: 4/106b lim: 45 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 ChangeBinInt- 00:08:32.730 [2024-07-25 15:56:50.643820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00003f02 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.730 [2024-07-25 15:56:50.643845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.730 #16 NEW cov: 12234 ft: 13844 corp: 5/123b lim: 45 exec/s: 0 rss: 70Mb L: 17/35 MS: 4 ChangeBit-CopyPart-InsertByte-CrossOver- 00:08:32.730 [2024-07-25 15:56:50.683942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.730 [2024-07-25 15:56:50.683966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.730 #18 NEW cov: 12234 ft: 14017 corp: 6/138b lim: 45 exec/s: 0 rss: 70Mb L: 15/35 MS: 2 ShuffleBytes-CrossOver- 00:08:32.989 [2024-07-25 15:56:50.724206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00003f02 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.989 [2024-07-25 15:56:50.724230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.989 [2024-07-25 15:56:50.724282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.989 [2024-07-25 15:56:50.724293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.989 #19 NEW cov: 12234 ft: 14292 corp: 7/156b lim: 45 exec/s: 0 rss: 70Mb L: 18/35 MS: 1 InsertByte- 00:08:32.989 [2024-07-25 15:56:50.774206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.989 [2024-07-25 15:56:50.774229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.989 #20 NEW cov: 12234 ft: 14394 corp: 8/171b lim: 45 exec/s: 0 rss: 71Mb L: 15/35 MS: 1 ChangeBinInt- 00:08:32.989 [2024-07-25 15:56:50.824314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.989 [2024-07-25 15:56:50.824336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.989 #21 NEW cov: 12234 ft: 14419 corp: 9/186b lim: 45 exec/s: 0 rss: 71Mb L: 15/35 MS: 1 ChangeBinInt- 00:08:32.989 [2024-07-25 15:56:50.874594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffbcfd cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.989 [2024-07-25 15:56:50.874617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.989 [2024-07-25 15:56:50.874669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.989 [2024-07-25 15:56:50.874680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.989 #22 NEW cov: 12234 ft: 14491 corp: 10/204b lim: 45 exec/s: 0 rss: 71Mb L: 18/35 MS: 1 ChangeBinInt- 00:08:32.989 [2024-07-25 15:56:50.924552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.989 [2024-07-25 15:56:50.924574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.989 #28 NEW cov: 12234 ft: 14541 corp: 11/219b lim: 45 exec/s: 0 rss: 71Mb L: 15/35 MS: 1 ShuffleBytes- 00:08:32.989 [2024-07-25 15:56:50.964702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:32.989 [2024-07-25 15:56:50.964724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.249 #29 NEW cov: 12234 ft: 14589 corp: 12/234b lim: 45 exec/s: 0 rss: 71Mb L: 15/35 MS: 1 ShuffleBytes- 00:08:33.249 [2024-07-25 15:56:51.005103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:bdbd0a10 cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.249 [2024-07-25 15:56:51.005125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.249 [2024-07-25 15:56:51.005196] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.249 [2024-07-25 15:56:51.005207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.249 [2024-07-25 15:56:51.005259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbd0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.249 [2024-07-25 15:56:51.005270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.249 #30 NEW cov: 12234 ft: 14598 corp: 13/265b lim: 45 exec/s: 0 rss: 71Mb L: 31/35 MS: 1 EraseBytes- 00:08:33.249 [2024-07-25 15:56:51.055105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffbcfd cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.249 [2024-07-25 15:56:51.055127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.249 [2024-07-25 15:56:51.055180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.249 [2024-07-25 15:56:51.055191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.249 #31 NEW cov: 12234 ft: 14614 corp: 14/284b lim: 45 exec/s: 0 rss: 71Mb L: 19/35 MS: 1 CrossOver- 00:08:33.249 [2024-07-25 15:56:51.105282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00170006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.249 [2024-07-25 15:56:51.105304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.249 [2024-07-25 15:56:51.105356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:d600aab8 cdw11:000f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.249 [2024-07-25 15:56:51.105367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.249 #32 NEW cov: 12234 ft: 14621 corp: 15/307b lim: 45 exec/s: 0 rss: 71Mb L: 23/35 MS: 1 CMP- DE: "\000\027\331Z\203\252\270\326"- 00:08:33.249 [2024-07-25 15:56:51.155723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:bdbd0a10 cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.249 [2024-07-25 15:56:51.155746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.249 [2024-07-25 15:56:51.155815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.249 [2024-07-25 15:56:51.155827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.249 [2024-07-25 15:56:51.155880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0017bdbd cdw11:d95a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.249 [2024-07-25 15:56:51.155891] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.249 [2024-07-25 15:56:51.155941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:bdbdd6bd cdw11:bd430002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.249 [2024-07-25 15:56:51.155952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.249 #33 NEW cov: 12234 ft: 14971 corp: 16/346b lim: 45 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 PersAutoDict- DE: "\000\027\331Z\203\252\270\326"- 00:08:33.249 [2024-07-25 15:56:51.205873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff00f6 cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.249 [2024-07-25 15:56:51.205895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.249 [2024-07-25 15:56:51.205963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.249 [2024-07-25 15:56:51.205974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.249 [2024-07-25 15:56:51.206027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.249 [2024-07-25 15:56:51.206037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.249 [2024-07-25 15:56:51.206088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.249 [2024-07-25 15:56:51.206099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.508 #34 NEW cov: 12234 ft: 14980 corp: 17/385b lim: 45 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 CMP- DE: "\366\377\377\377"- 00:08:33.508 [2024-07-25 15:56:51.255483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.508 [2024-07-25 15:56:51.255504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.508 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:33.508 #35 NEW cov: 12257 ft: 15025 corp: 18/401b lim: 45 exec/s: 0 rss: 72Mb L: 16/39 MS: 1 InsertByte- 00:08:33.508 [2024-07-25 15:56:51.315798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffbcfd cdw11:2b2b0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.508 [2024-07-25 15:56:51.315820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.509 [2024-07-25 15:56:51.315873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.509 [2024-07-25 15:56:51.315884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.509 #36 NEW cov: 12257 ft: 15062 corp: 19/423b lim: 45 exec/s: 0 rss: 72Mb L: 22/39 MS: 1 InsertRepeatedBytes- 00:08:33.509 [2024-07-25 15:56:51.356323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffbcfd cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.509 [2024-07-25 15:56:51.356345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.509 [2024-07-25 15:56:51.356396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.509 [2024-07-25 15:56:51.356406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.509 [2024-07-25 15:56:51.356457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.509 [2024-07-25 15:56:51.356468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.509 [2024-07-25 15:56:51.356519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.509 [2024-07-25 15:56:51.356529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.509 #37 NEW cov: 12257 ft: 15105 corp: 20/465b lim: 45 exec/s: 37 rss: 72Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:08:33.509 [2024-07-25 15:56:51.405929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.509 [2024-07-25 15:56:51.405951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.509 #38 NEW cov: 12257 ft: 15136 corp: 21/481b lim: 45 exec/s: 38 rss: 72Mb L: 16/42 MS: 1 ShuffleBytes- 00:08:33.509 [2024-07-25 15:56:51.456233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.509 [2024-07-25 15:56:51.456255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.509 [2024-07-25 15:56:51.456322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.509 [2024-07-25 15:56:51.456333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.509 #39 NEW cov: 12257 ft: 15145 corp: 22/505b lim: 45 exec/s: 39 rss: 72Mb L: 24/42 MS: 1 InsertRepeatedBytes- 00:08:33.768 [2024-07-25 15:56:51.506563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.506585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.768 [2024-07-25 15:56:51.506639] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252c25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.506650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.768 [2024-07-25 15:56:51.506701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.506712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.768 #40 NEW cov: 12257 ft: 15149 corp: 23/540b lim: 45 exec/s: 40 rss: 72Mb L: 35/42 MS: 1 InsertRepeatedBytes- 00:08:33.768 [2024-07-25 15:56:51.546835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff00f6 cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.546856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.768 [2024-07-25 15:56:51.546924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.546946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.768 [2024-07-25 15:56:51.546995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.547005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.768 [2024-07-25 15:56:51.547054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.547064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.768 #41 NEW cov: 12257 ft: 15193 corp: 24/579b lim: 45 exec/s: 41 rss: 72Mb L: 39/42 MS: 1 ChangeBinInt- 00:08:33.768 [2024-07-25 15:56:51.596493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.596515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.768 #42 NEW cov: 12257 ft: 15196 corp: 25/595b lim: 45 exec/s: 42 rss: 72Mb L: 16/42 MS: 1 ChangeBit- 00:08:33.768 [2024-07-25 15:56:51.647003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:bdbd0a10 cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.647026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.768 [2024-07-25 15:56:51.647094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.647105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.768 [2024-07-25 15:56:51.647153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbd0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.647164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.768 #43 NEW cov: 12257 ft: 15212 corp: 26/626b lim: 45 exec/s: 43 rss: 72Mb L: 31/42 MS: 1 ChangeBit- 00:08:33.768 [2024-07-25 15:56:51.687084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.687108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.768 [2024-07-25 15:56:51.687178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.687189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.768 [2024-07-25 15:56:51.687240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.687251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.768 #44 NEW cov: 12257 ft: 15262 corp: 27/654b lim: 45 exec/s: 44 rss: 72Mb L: 28/42 MS: 1 EraseBytes- 00:08:33.768 [2024-07-25 15:56:51.727374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffbcfd cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.727397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.768 [2024-07-25 15:56:51.727449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.727461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.768 [2024-07-25 15:56:51.727511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.727521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.768 [2024-07-25 15:56:51.727573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.768 [2024-07-25 15:56:51.727583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.028 #45 NEW cov: 12257 ft: 15306 corp: 28/696b lim: 45 exec/s: 45 rss: 72Mb L: 42/42 MS: 1 ShuffleBytes- 00:08:34.028 [2024-07-25 15:56:51.777497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:bdb90a10 cdw11:bdbd0005 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.777518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.028 [2024-07-25 15:56:51.777586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.777597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.028 [2024-07-25 15:56:51.777648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0017bdbd cdw11:d95a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.777658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.028 [2024-07-25 15:56:51.777707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:bdbdd6bd cdw11:bd430002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.777717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.028 #46 NEW cov: 12257 ft: 15374 corp: 29/735b lim: 45 exec/s: 46 rss: 72Mb L: 39/42 MS: 1 ChangeBit- 00:08:34.028 [2024-07-25 15:56:51.827643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:bdb90a10 cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.827668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.028 [2024-07-25 15:56:51.827719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.827730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.028 [2024-07-25 15:56:51.827783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0017bdbd cdw11:00170006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.827794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.028 [2024-07-25 15:56:51.827844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:d6bdaab8 cdw11:bd430002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.827855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.028 #47 NEW cov: 12257 ft: 15388 corp: 30/774b lim: 45 exec/s: 47 rss: 72Mb L: 39/42 MS: 1 PersAutoDict- DE: "\000\027\331Z\203\252\270\326"- 00:08:34.028 [2024-07-25 15:56:51.877453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fffffcfd cdw11:2b2b0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.877474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.028 [2024-07-25 15:56:51.877542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.877552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.028 #48 NEW cov: 12257 ft: 15405 corp: 31/796b lim: 45 exec/s: 48 rss: 73Mb L: 22/42 MS: 1 ChangeBit- 00:08:34.028 [2024-07-25 15:56:51.927968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:bdb90a10 cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.927990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.028 [2024-07-25 15:56:51.928042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.928053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.028 [2024-07-25 15:56:51.928105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0017bdbd cdw11:d95a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.928116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.028 [2024-07-25 15:56:51.928167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:2700d6bd cdw11:bd430002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.928178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.028 #49 NEW cov: 12257 ft: 15457 corp: 32/835b lim: 45 exec/s: 49 rss: 73Mb L: 39/42 MS: 1 ChangeBinInt- 00:08:34.028 [2024-07-25 15:56:51.968072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:bdb90a10 cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.968094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.028 [2024-07-25 15:56:51.968144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.968158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.028 [2024-07-25 15:56:51.968209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0017bdbd cdw11:00170006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.968220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.028 [2024-07-25 15:56:51.968269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:d6bdaa1a cdw11:bd430002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:51.968279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.028 #50 NEW cov: 12257 ft: 15485 corp: 33/874b lim: 45 exec/s: 50 rss: 73Mb L: 39/42 MS: 1 ChangeByte- 00:08:34.028 [2024-07-25 15:56:52.017842] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffbcfd cdw11:2b2b0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:52.017866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.028 [2024-07-25 15:56:52.017921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.028 [2024-07-25 15:56:52.017933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.288 #51 NEW cov: 12257 ft: 15489 corp: 34/896b lim: 45 exec/s: 51 rss: 73Mb L: 22/42 MS: 1 ShuffleBytes- 00:08:34.288 [2024-07-25 15:56:52.058167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.288 [2024-07-25 15:56:52.058191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.288 [2024-07-25 15:56:52.058243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252c25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.288 [2024-07-25 15:56:52.058254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.288 [2024-07-25 15:56:52.058303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.288 [2024-07-25 15:56:52.058313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.288 #52 NEW cov: 12257 ft: 15510 corp: 35/931b lim: 45 exec/s: 52 rss: 73Mb L: 35/42 MS: 1 ShuffleBytes- 00:08:34.288 [2024-07-25 15:56:52.107934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.288 [2024-07-25 15:56:52.107956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.288 #53 NEW cov: 12257 ft: 15516 corp: 36/940b lim: 45 exec/s: 53 rss: 73Mb L: 9/42 MS: 1 EraseBytes- 00:08:34.288 [2024-07-25 15:56:52.148522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff93bcfd cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.288 [2024-07-25 15:56:52.148544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.288 [2024-07-25 15:56:52.148598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.288 [2024-07-25 15:56:52.148608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.288 [2024-07-25 15:56:52.148658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.288 [2024-07-25 15:56:52.148672] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.288 [2024-07-25 15:56:52.148721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.288 [2024-07-25 15:56:52.148732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.288 #54 NEW cov: 12257 ft: 15530 corp: 37/983b lim: 45 exec/s: 54 rss: 73Mb L: 43/43 MS: 1 CopyPart- 00:08:34.288 [2024-07-25 15:56:52.198166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.288 [2024-07-25 15:56:52.198189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.288 #55 NEW cov: 12257 ft: 15539 corp: 38/992b lim: 45 exec/s: 55 rss: 73Mb L: 9/43 MS: 1 ShuffleBytes- 00:08:34.288 [2024-07-25 15:56:52.248830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:bdb90a10 cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.288 [2024-07-25 15:56:52.248853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.288 [2024-07-25 15:56:52.248922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbd0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.288 [2024-07-25 15:56:52.248933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.288 [2024-07-25 15:56:52.248983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0017bdbd cdw11:00170006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.288 [2024-07-25 15:56:52.248994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.288 [2024-07-25 15:56:52.249054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:d6bdaab8 cdw11:bd2d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.288 [2024-07-25 15:56:52.249064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.288 #56 NEW cov: 12257 ft: 15541 corp: 39/1031b lim: 45 exec/s: 56 rss: 73Mb L: 39/43 MS: 1 ChangeByte- 00:08:34.547 [2024-07-25 15:56:52.288607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffbcfd cdw11:2b2b0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.547 [2024-07-25 15:56:52.288630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.547 [2024-07-25 15:56:52.288696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.547 [2024-07-25 15:56:52.288707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.547 #57 NEW cov: 12257 ft: 15587 corp: 40/1049b lim: 45 exec/s: 57 rss: 73Mb L: 18/43 MS: 1 EraseBytes- 00:08:34.547 [2024-07-25 15:56:52.328920] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.547 [2024-07-25 15:56:52.328943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.547 [2024-07-25 15:56:52.329013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.547 [2024-07-25 15:56:52.329024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.547 [2024-07-25 15:56:52.329076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.547 [2024-07-25 15:56:52.329086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.547 #58 NEW cov: 12257 ft: 15593 corp: 41/1078b lim: 45 exec/s: 58 rss: 74Mb L: 29/43 MS: 1 InsertByte- 00:08:34.547 [2024-07-25 15:56:52.378699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.547 [2024-07-25 15:56:52.378720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.547 [2024-07-25 15:56:52.418796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.547 [2024-07-25 15:56:52.418818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.547 #60 NEW cov: 12257 ft: 15635 corp: 42/1094b lim: 45 exec/s: 30 rss: 74Mb L: 16/43 MS: 2 CrossOver-InsertByte- 00:08:34.547 #60 DONE cov: 12257 ft: 15635 corp: 42/1094b lim: 45 exec/s: 30 rss: 74Mb 00:08:34.547 ###### Recommended dictionary. ###### 00:08:34.547 "\000\027\331Z\203\252\270\326" # Uses: 2 00:08:34.547 "\366\377\377\377" # Uses: 0 00:08:34.547 ###### End of recommended dictionary. 
###### 00:08:34.547 Done 60 runs in 2 second(s) 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:34.806 15:56:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:08:34.807 [2024-07-25 15:56:52.597137] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:08:34.807 [2024-07-25 15:56:52.597197] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167477 ] 00:08:34.807 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.807 [2024-07-25 15:56:52.776721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.065 [2024-07-25 15:56:52.841918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.065 [2024-07-25 15:56:52.900538] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.065 [2024-07-25 15:56:52.916776] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:08:35.065 INFO: Running with entropic power schedule (0xFF, 100). 00:08:35.065 INFO: Seed: 183638622 00:08:35.065 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:35.065 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:35.065 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:35.065 INFO: A corpus is not provided, starting from an empty corpus 00:08:35.065 #2 INITED exec/s: 0 rss: 63Mb 00:08:35.065 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:35.065 This may also happen if the target rejected all inputs we tried so far 00:08:35.065 [2024-07-25 15:56:52.961434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004747 cdw11:00000000 00:08:35.065 [2024-07-25 15:56:52.961466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.324 NEW_FUNC[1/699]: 0x48e990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:08:35.324 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:35.324 #5 NEW cov: 11944 ft: 11933 corp: 2/3b lim: 10 exec/s: 0 rss: 71Mb L: 2/2 MS: 3 ChangeByte-CopyPart-CopyPart- 00:08:35.324 [2024-07-25 15:56:53.141871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003333 cdw11:00000000 00:08:35.324 [2024-07-25 15:56:53.141908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.324 #8 NEW cov: 12060 ft: 12575 corp: 3/5b lim: 10 exec/s: 0 rss: 71Mb L: 2/2 MS: 3 ChangeByte-CopyPart-CopyPart- 00:08:35.324 [2024-07-25 15:56:53.201950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 00:08:35.324 [2024-07-25 15:56:53.201979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.324 #11 NEW cov: 12066 ft: 12806 corp: 4/7b lim: 10 exec/s: 0 rss: 71Mb L: 2/2 MS: 3 ChangeBit-CopyPart-CrossOver- 00:08:35.324 [2024-07-25 15:56:53.252061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003350 cdw11:00000000 00:08:35.324 [2024-07-25 15:56:53.252089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:35.581 #12 NEW cov: 12151 ft: 13005 corp: 5/10b lim: 10 exec/s: 0 rss: 71Mb L: 3/3 MS: 1 InsertByte- 00:08:35.581 [2024-07-25 15:56:53.332264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002b01 cdw11:00000000 00:08:35.581 [2024-07-25 15:56:53.332290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.581 #15 NEW cov: 12151 ft: 13113 corp: 6/12b lim: 10 exec/s: 0 rss: 72Mb L: 2/3 MS: 3 ShuffleBytes-ChangeBinInt-InsertByte- 00:08:35.581 [2024-07-25 15:56:53.382380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005033 cdw11:00000000 00:08:35.581 [2024-07-25 15:56:53.382408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.581 #16 NEW cov: 12151 ft: 13157 corp: 7/15b lim: 10 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 CopyPart- 00:08:35.581 [2024-07-25 15:56:53.462608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 00:08:35.581 [2024-07-25 15:56:53.462639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.581 #17 NEW cov: 12151 ft: 13205 corp: 8/17b lim: 10 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ShuffleBytes- 00:08:35.581 [2024-07-25 15:56:53.542825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000808 cdw11:00000000 00:08:35.581 [2024-07-25 15:56:53.542854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.840 #18 NEW cov: 12151 ft: 13212 corp: 9/19b lim: 10 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ChangeBit- 00:08:35.840 [2024-07-25 15:56:53.592938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001df0 cdw11:00000000 00:08:35.840 [2024-07-25 15:56:53.592967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.840 #20 NEW cov: 12151 ft: 13227 corp: 10/21b lim: 10 exec/s: 0 rss: 72Mb L: 2/3 MS: 2 ChangeByte-InsertByte- 00:08:35.840 [2024-07-25 15:56:53.643127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005033 cdw11:00000000 00:08:35.840 [2024-07-25 15:56:53.643155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.840 #21 NEW cov: 12151 ft: 13265 corp: 11/24b lim: 10 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 ShuffleBytes- 00:08:35.840 [2024-07-25 15:56:53.723299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000101 cdw11:00000000 00:08:35.840 [2024-07-25 15:56:53.723326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.840 #22 NEW cov: 12151 ft: 13338 corp: 12/26b lim: 10 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 CMP- DE: "\001\001"- 00:08:35.840 [2024-07-25 15:56:53.803489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000101 cdw11:00000000 00:08:35.840 [2024-07-25 15:56:53.803515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.099 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:36.099 #23 NEW cov: 12168 ft: 13476 corp: 13/28b lim: 10 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 CopyPart- 00:08:36.099 [2024-07-25 15:56:53.883710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004728 cdw11:00000000 00:08:36.099 [2024-07-25 15:56:53.883738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.099 #24 NEW cov: 12168 ft: 13485 corp: 14/30b lim: 10 exec/s: 24 rss: 72Mb L: 2/3 MS: 1 ChangeByte- 00:08:36.099 [2024-07-25 15:56:53.963899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007a0a cdw11:00000000 00:08:36.099 [2024-07-25 15:56:53.963925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.099 #25 NEW cov: 12168 ft: 13503 corp: 15/33b lim: 10 exec/s: 25 rss: 72Mb L: 3/3 MS: 1 InsertByte- 00:08:36.099 [2024-07-25 15:56:54.014065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004728 cdw11:00000000 00:08:36.099 [2024-07-25 15:56:54.014092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.099 #26 NEW cov: 12168 ft: 13532 corp: 16/36b lim: 10 exec/s: 26 rss: 72Mb L: 3/3 MS: 1 InsertByte- 00:08:36.358 [2024-07-25 15:56:54.094272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000502a cdw11:00000000 00:08:36.358 [2024-07-25 15:56:54.094297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.358 #27 NEW cov: 12168 ft: 13580 corp: 17/39b lim: 10 exec/s: 27 rss: 72Mb L: 3/3 MS: 1 ChangeByte- 00:08:36.358 [2024-07-25 15:56:54.144399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003351 cdw11:00000000 00:08:36.358 [2024-07-25 15:56:54.144424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.358 #28 NEW cov: 12168 ft: 13680 corp: 18/42b lim: 10 exec/s: 28 rss: 72Mb L: 3/3 MS: 1 ChangeBit- 00:08:36.358 [2024-07-25 15:56:54.194481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:36.358 [2024-07-25 15:56:54.194507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.358 #29 NEW cov: 12168 ft: 13712 corp: 19/44b lim: 10 exec/s: 29 rss: 72Mb L: 2/3 MS: 1 ChangeBit- 00:08:36.358 [2024-07-25 15:56:54.274702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000808 cdw11:00000000 00:08:36.358 [2024-07-25 15:56:54.274729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.358 #30 NEW cov: 12168 ft: 13753 corp: 20/46b lim: 10 exec/s: 30 rss: 72Mb L: 2/3 MS: 1 ShuffleBytes- 00:08:36.616 [2024-07-25 15:56:54.354960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000808 cdw11:00000000 00:08:36.616 
[2024-07-25 15:56:54.354986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.616 #31 NEW cov: 12168 ft: 13785 corp: 21/48b lim: 10 exec/s: 31 rss: 72Mb L: 2/3 MS: 1 CopyPart- 00:08:36.616 [2024-07-25 15:56:54.435162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000472b cdw11:00000000 00:08:36.616 [2024-07-25 15:56:54.435188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.616 #32 NEW cov: 12168 ft: 13822 corp: 22/51b lim: 10 exec/s: 32 rss: 72Mb L: 3/3 MS: 1 CrossOver- 00:08:36.616 [2024-07-25 15:56:54.515370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000101 cdw11:00000000 00:08:36.616 [2024-07-25 15:56:54.515395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.616 #33 NEW cov: 12168 ft: 13834 corp: 23/53b lim: 10 exec/s: 33 rss: 72Mb L: 2/3 MS: 1 ShuffleBytes- 00:08:36.616 [2024-07-25 15:56:54.565464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001df0 cdw11:00000000 00:08:36.616 [2024-07-25 15:56:54.565489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.875 #34 NEW cov: 12168 ft: 13842 corp: 24/55b lim: 10 exec/s: 34 rss: 73Mb L: 2/3 MS: 1 ShuffleBytes- 00:08:36.875 [2024-07-25 15:56:54.645755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004701 cdw11:00000000 00:08:36.875 [2024-07-25 15:56:54.645787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.875 [2024-07-25 15:56:54.645815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000147 cdw11:00000000 00:08:36.875 [2024-07-25 15:56:54.645828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.875 #35 NEW cov: 12168 ft: 14047 corp: 25/59b lim: 10 exec/s: 35 rss: 73Mb L: 4/4 MS: 1 PersAutoDict- DE: "\001\001"- 00:08:36.875 [2024-07-25 15:56:54.705856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000901 cdw11:00000000 00:08:36.875 [2024-07-25 15:56:54.705885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.875 #36 NEW cov: 12168 ft: 14064 corp: 26/61b lim: 10 exec/s: 36 rss: 73Mb L: 2/4 MS: 1 ChangeBit- 00:08:36.875 [2024-07-25 15:56:54.786093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000809 cdw11:00000000 00:08:36.875 [2024-07-25 15:56:54.786125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.875 #37 NEW cov: 12168 ft: 14072 corp: 27/63b lim: 10 exec/s: 37 rss: 73Mb L: 2/4 MS: 1 ChangeBit- 00:08:36.875 [2024-07-25 15:56:54.846244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001df0 cdw11:00000000 00:08:36.875 [2024-07-25 15:56:54.846272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.135 #38 NEW cov: 12175 ft: 14084 corp: 28/65b lim: 10 exec/s: 38 rss: 73Mb L: 2/4 MS: 1 ShuffleBytes- 00:08:37.135 [2024-07-25 15:56:54.926524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003350 cdw11:00000000 00:08:37.135 [2024-07-25 15:56:54.926550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.135 [2024-07-25 15:56:54.926593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003333 cdw11:00000000 00:08:37.135 [2024-07-25 15:56:54.926605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.135 [2024-07-25 15:56:54.926630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00005033 cdw11:00000000 00:08:37.135 [2024-07-25 15:56:54.926643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:37.135 #39 NEW cov: 12175 ft: 14313 corp: 29/71b lim: 10 exec/s: 19 rss: 73Mb L: 6/6 MS: 1 CopyPart- 00:08:37.135 #39 DONE cov: 12175 ft: 14313 corp: 29/71b lim: 10 exec/s: 19 rss: 73Mb 00:08:37.135 ###### Recommended dictionary. ###### 00:08:37.135 "\001\001" # Uses: 1 00:08:37.135 ###### End of recommended dictionary. ###### 00:08:37.135 Done 39 runs in 2 second(s) 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:37.135 15:56:55 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:37.135 15:56:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:08:37.135 [2024-07-25 15:56:55.121222] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:08:37.135 [2024-07-25 15:56:55.121298] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167908 ] 00:08:37.394 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.394 [2024-07-25 15:56:55.304577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.394 [2024-07-25 15:56:55.368968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.653 [2024-07-25 15:56:55.427728] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.653 [2024-07-25 15:56:55.443960] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:08:37.653 INFO: Running with entropic power schedule (0xFF, 100). 00:08:37.653 INFO: Seed: 2711629915 00:08:37.653 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:37.653 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:37.653 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:37.653 INFO: A corpus is not provided, starting from an empty corpus 00:08:37.653 #2 INITED exec/s: 0 rss: 64Mb 00:08:37.653 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:37.653 This may also happen if the target rejected all inputs we tried so far 00:08:37.653 [2024-07-25 15:56:55.499361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eded cdw11:00000000 00:08:37.653 [2024-07-25 15:56:55.499386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.653 [2024-07-25 15:56:55.499450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000eded cdw11:00000000 00:08:37.653 [2024-07-25 15:56:55.499460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.653 NEW_FUNC[1/696]: 0x48f380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:08:37.654 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:37.654 #8 NEW cov: 11923 ft: 11922 corp: 2/6b lim: 10 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:08:37.913 [2024-07-25 15:56:55.650095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ed16 cdw11:00000000 00:08:37.913 [2024-07-25 15:56:55.650144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.913 [2024-07-25 15:56:55.650216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000eded cdw11:00000000 00:08:37.913 [2024-07-25 15:56:55.650235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.913 NEW_FUNC[1/3]: 0x161de40 in nvme_ctrlr_process_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3926 00:08:37.913 NEW_FUNC[2/3]: 0x17f8180 in spdk_nvme_probe_poll_async /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme.c:1565 00:08:37.913 #9 NEW cov: 12060 ft: 12661 corp: 3/11b lim: 10 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 ChangeBinInt- 00:08:37.913 [2024-07-25 15:56:55.719948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eded cdw11:00000000 00:08:37.913 [2024-07-25 15:56:55.719972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.913 [2024-07-25 15:56:55.720037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000067ed cdw11:00000000 00:08:37.913 [2024-07-25 15:56:55.720051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.913 #10 NEW cov: 12066 ft: 12938 corp: 4/16b lim: 10 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 ChangeByte- 00:08:37.913 [2024-07-25 15:56:55.760172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eded cdw11:00000000 00:08:37.913 [2024-07-25 15:56:55.760194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.913 [2024-07-25 15:56:55.760260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005aed cdw11:00000000 
00:08:37.913 [2024-07-25 15:56:55.760271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.913 [2024-07-25 15:56:55.760319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ed0a cdw11:00000000 00:08:37.913 [2024-07-25 15:56:55.760329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:37.913 #11 NEW cov: 12151 ft: 13477 corp: 5/22b lim: 10 exec/s: 0 rss: 71Mb L: 6/6 MS: 1 InsertByte- 00:08:37.913 [2024-07-25 15:56:55.800154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001312 cdw11:00000000 00:08:37.913 [2024-07-25 15:56:55.800176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.913 [2024-07-25 15:56:55.800224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001219 cdw11:00000000 00:08:37.913 [2024-07-25 15:56:55.800235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.913 #12 NEW cov: 12151 ft: 13524 corp: 6/27b lim: 10 exec/s: 0 rss: 71Mb L: 5/6 MS: 1 ChangeBinInt- 00:08:37.913 [2024-07-25 15:56:55.840262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ede0 cdw11:00000000 00:08:37.913 [2024-07-25 15:56:55.840283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:37.913 [2024-07-25 15:56:55.840350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000eded cdw11:00000000 00:08:37.913 [2024-07-25 15:56:55.840361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:37.913 #13 NEW cov: 12151 ft: 13607 corp: 7/32b lim: 10 exec/s: 0 rss: 72Mb L: 5/6 MS: 1 ChangeBinInt- 00:08:37.913 [2024-07-25 15:56:55.890312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000002c cdw11:00000000 00:08:37.913 [2024-07-25 15:56:55.890334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.173 #15 NEW cov: 12151 ft: 13866 corp: 8/34b lim: 10 exec/s: 0 rss: 72Mb L: 2/6 MS: 2 ChangeBinInt-InsertByte- 00:08:38.173 [2024-07-25 15:56:55.930508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001312 cdw11:00000000 00:08:38.173 [2024-07-25 15:56:55.930529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.173 [2024-07-25 15:56:55.930593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001219 cdw11:00000000 00:08:38.173 [2024-07-25 15:56:55.930604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.173 #16 NEW cov: 12151 ft: 13890 corp: 9/39b lim: 10 exec/s: 0 rss: 72Mb L: 5/6 MS: 1 ShuffleBytes- 00:08:38.173 [2024-07-25 15:56:55.980784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ede0 cdw11:00000000 
00:08:38.173 [2024-07-25 15:56:55.980806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.173 [2024-07-25 15:56:55.980858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005ded cdw11:00000000 00:08:38.173 [2024-07-25 15:56:55.980869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.173 [2024-07-25 15:56:55.980917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ed0a cdw11:00000000 00:08:38.173 [2024-07-25 15:56:55.980928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.173 #17 NEW cov: 12151 ft: 13915 corp: 10/45b lim: 10 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 InsertByte- 00:08:38.173 [2024-07-25 15:56:56.030712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000002c cdw11:00000000 00:08:38.173 [2024-07-25 15:56:56.030735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.173 #18 NEW cov: 12151 ft: 13984 corp: 11/47b lim: 10 exec/s: 0 rss: 72Mb L: 2/6 MS: 1 CopyPart- 00:08:38.173 [2024-07-25 15:56:56.081252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ed16 cdw11:00000000 00:08:38.173 [2024-07-25 15:56:56.081275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.173 [2024-07-25 15:56:56.081324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00009191 cdw11:00000000 00:08:38.173 [2024-07-25 15:56:56.081335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.173 [2024-07-25 15:56:56.081382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000091ed cdw11:00000000 00:08:38.173 [2024-07-25 15:56:56.081393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.173 [2024-07-25 15:56:56.081443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ed0a cdw11:00000000 00:08:38.173 [2024-07-25 15:56:56.081454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:38.173 #19 NEW cov: 12151 ft: 14200 corp: 12/55b lim: 10 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:08:38.173 [2024-07-25 15:56:56.121351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ed16 cdw11:00000000 00:08:38.173 [2024-07-25 15:56:56.121374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.173 [2024-07-25 15:56:56.121439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00009191 cdw11:00000000 00:08:38.173 [2024-07-25 15:56:56.121450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.173 [2024-07-25 15:56:56.121501] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000091ed cdw11:00000000 00:08:38.173 [2024-07-25 15:56:56.121512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.173 [2024-07-25 15:56:56.121560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00007eed cdw11:00000000 00:08:38.173 [2024-07-25 15:56:56.121570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:38.173 #20 NEW cov: 12151 ft: 14268 corp: 13/64b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 InsertByte- 00:08:38.432 [2024-07-25 15:56:56.171345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000013 cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.171368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.432 [2024-07-25 15:56:56.171435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002c12 cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.171447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.432 [2024-07-25 15:56:56.171495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001219 cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.171506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.432 #21 NEW cov: 12151 ft: 14329 corp: 14/70b lim: 10 exec/s: 0 rss: 72Mb L: 6/9 MS: 1 CrossOver- 00:08:38.432 [2024-07-25 15:56:56.211442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003aed cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.211464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.432 [2024-07-25 15:56:56.211514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000016ed cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.211524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.432 [2024-07-25 15:56:56.211572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ed0a cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.211582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.432 #22 NEW cov: 12151 ft: 14346 corp: 15/76b lim: 10 exec/s: 0 rss: 72Mb L: 6/9 MS: 1 InsertByte- 00:08:38.432 [2024-07-25 15:56:56.251704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.251727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.432 [2024-07-25 15:56:56.251775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000ed cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.251786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.432 [2024-07-25 15:56:56.251835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000016ed cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.251845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.432 [2024-07-25 15:56:56.251893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ed0a cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.251904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:38.432 #23 NEW cov: 12151 ft: 14362 corp: 16/84b lim: 10 exec/s: 0 rss: 72Mb L: 8/9 MS: 1 InsertRepeatedBytes- 00:08:38.432 [2024-07-25 15:56:56.291569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eded cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.291591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.432 [2024-07-25 15:56:56.291641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000067ed cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.291651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.432 #24 NEW cov: 12151 ft: 14406 corp: 17/89b lim: 10 exec/s: 0 rss: 72Mb L: 5/9 MS: 1 CopyPart- 00:08:38.432 [2024-07-25 15:56:56.341956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ed0a cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.341978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.432 [2024-07-25 15:56:56.342047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.342058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.432 [2024-07-25 15:56:56.342108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.342118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.432 [2024-07-25 15:56:56.342167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.342177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:38.432 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:38.432 #25 NEW cov: 12174 ft: 14440 corp: 18/98b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CMP- DE: "\012\000\000\000\000\000\000\000"- 00:08:38.432 [2024-07-25 15:56:56.391726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002c2c cdw11:00000000 00:08:38.432 [2024-07-25 15:56:56.391748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.432 #26 NEW 
cov: 12174 ft: 14489 corp: 19/100b lim: 10 exec/s: 0 rss: 72Mb L: 2/9 MS: 1 ChangeByte- 00:08:38.692 [2024-07-25 15:56:56.432225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ed0a cdw11:00000000 00:08:38.692 [2024-07-25 15:56:56.432246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.692 [2024-07-25 15:56:56.432311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.692 [2024-07-25 15:56:56.432322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.692 [2024-07-25 15:56:56.432371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.692 [2024-07-25 15:56:56.432381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.692 [2024-07-25 15:56:56.432430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.692 [2024-07-25 15:56:56.432439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:38.692 #27 NEW cov: 12174 ft: 14514 corp: 20/108b lim: 10 exec/s: 0 rss: 72Mb L: 8/9 MS: 1 EraseBytes- 00:08:38.692 [2024-07-25 15:56:56.482139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eded cdw11:00000000 00:08:38.692 [2024-07-25 15:56:56.482160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.693 [2024-07-25 15:56:56.482228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000067ed cdw11:00000000 00:08:38.693 [2024-07-25 15:56:56.482239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.693 #28 NEW cov: 12174 ft: 14519 corp: 21/113b lim: 10 exec/s: 28 rss: 72Mb L: 5/9 MS: 1 ShuffleBytes- 00:08:38.693 [2024-07-25 15:56:56.522471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000013e9 cdw11:00000000 00:08:38.693 [2024-07-25 15:56:56.522492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.693 [2024-07-25 15:56:56.522557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006e6e cdw11:00000000 00:08:38.693 [2024-07-25 15:56:56.522570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.693 [2024-07-25 15:56:56.522620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00006e12 cdw11:00000000 00:08:38.693 [2024-07-25 15:56:56.522630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.693 [2024-07-25 15:56:56.522680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008112 cdw11:00000000 00:08:38.693 [2024-07-25 15:56:56.522690] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:38.693 #29 NEW cov: 12174 ft: 14539 corp: 22/122b lim: 10 exec/s: 29 rss: 72Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:38.693 [2024-07-25 15:56:56.562319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eded cdw11:00000000 00:08:38.693 [2024-07-25 15:56:56.562340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.693 [2024-07-25 15:56:56.562390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000067ed cdw11:00000000 00:08:38.693 [2024-07-25 15:56:56.562401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.693 #30 NEW cov: 12174 ft: 14572 corp: 23/127b lim: 10 exec/s: 30 rss: 72Mb L: 5/9 MS: 1 ChangeBinInt- 00:08:38.693 [2024-07-25 15:56:56.612606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eded cdw11:00000000 00:08:38.693 [2024-07-25 15:56:56.612627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.693 [2024-07-25 15:56:56.612693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006719 cdw11:00000000 00:08:38.693 [2024-07-25 15:56:56.612705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.693 [2024-07-25 15:56:56.612754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ed0a cdw11:00000000 00:08:38.693 [2024-07-25 15:56:56.612768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.693 #31 NEW cov: 12174 ft: 14613 corp: 24/133b lim: 10 exec/s: 31 rss: 72Mb L: 6/9 MS: 1 InsertByte- 00:08:38.693 [2024-07-25 15:56:56.652494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001312 cdw11:00000000 00:08:38.693 [2024-07-25 15:56:56.652516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.952 #32 NEW cov: 12174 ft: 14647 corp: 25/136b lim: 10 exec/s: 32 rss: 72Mb L: 3/9 MS: 1 EraseBytes- 00:08:38.952 [2024-07-25 15:56:56.702874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eded cdw11:00000000 00:08:38.952 [2024-07-25 15:56:56.702897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.952 [2024-07-25 15:56:56.702946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005aed cdw11:00000000 00:08:38.952 [2024-07-25 15:56:56.702957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.952 [2024-07-25 15:56:56.703004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ed2a cdw11:00000000 00:08:38.952 [2024-07-25 15:56:56.703014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:08:38.952 #33 NEW cov: 12174 ft: 14658 corp: 26/143b lim: 10 exec/s: 33 rss: 72Mb L: 7/9 MS: 1 InsertByte- 00:08:38.952 [2024-07-25 15:56:56.753024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eded cdw11:00000000 00:08:38.952 [2024-07-25 15:56:56.753045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.952 [2024-07-25 15:56:56.753111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000067ed cdw11:00000000 00:08:38.952 [2024-07-25 15:56:56.753122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.952 [2024-07-25 15:56:56.753170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000005d4 cdw11:00000000 00:08:38.952 [2024-07-25 15:56:56.753181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.952 #34 NEW cov: 12174 ft: 14691 corp: 27/149b lim: 10 exec/s: 34 rss: 73Mb L: 6/9 MS: 1 InsertByte- 00:08:38.952 [2024-07-25 15:56:56.803259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eded cdw11:00000000 00:08:38.952 [2024-07-25 15:56:56.803280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.952 [2024-07-25 15:56:56.803348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000067ed cdw11:00000000 00:08:38.952 [2024-07-25 15:56:56.803358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.952 [2024-07-25 15:56:56.803407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.952 [2024-07-25 15:56:56.803417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.952 [2024-07-25 15:56:56.803465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.952 [2024-07-25 15:56:56.803474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:38.952 #35 NEW cov: 12174 ft: 14715 corp: 28/158b lim: 10 exec/s: 35 rss: 73Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:08:38.952 [2024-07-25 15:56:56.843234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ede0 cdw11:00000000 00:08:38.952 [2024-07-25 15:56:56.843256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.952 [2024-07-25 15:56:56.843320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a312 cdw11:00000000 00:08:38.952 [2024-07-25 15:56:56.843331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.952 [2024-07-25 15:56:56.843381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000012f5 cdw11:00000000 00:08:38.952 [2024-07-25 15:56:56.843392] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.952 #36 NEW cov: 12174 ft: 14739 corp: 29/164b lim: 10 exec/s: 36 rss: 73Mb L: 6/9 MS: 1 ChangeBinInt- 00:08:38.952 [2024-07-25 15:56:56.893387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000080ed cdw11:00000000 00:08:38.952 [2024-07-25 15:56:56.893409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:38.952 [2024-07-25 15:56:56.893456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ed67 cdw11:00000000 00:08:38.952 [2024-07-25 15:56:56.893466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:38.953 [2024-07-25 15:56:56.893513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ed0a cdw11:00000000 00:08:38.953 [2024-07-25 15:56:56.893525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:38.953 #37 NEW cov: 12174 ft: 14751 corp: 30/170b lim: 10 exec/s: 37 rss: 73Mb L: 6/9 MS: 1 InsertByte- 00:08:39.212 [2024-07-25 15:56:56.943708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000013e9 cdw11:00000000 00:08:39.212 [2024-07-25 15:56:56.943731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.212 [2024-07-25 15:56:56.943786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006e6e cdw11:00000000 00:08:39.212 [2024-07-25 15:56:56.943797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.212 [2024-07-25 15:56:56.943848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00006e12 cdw11:00000000 00:08:39.212 [2024-07-25 15:56:56.943859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:39.212 [2024-07-25 15:56:56.943908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000012fc cdw11:00000000 00:08:39.212 [2024-07-25 15:56:56.943919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:39.212 #38 NEW cov: 12174 ft: 14768 corp: 31/178b lim: 10 exec/s: 38 rss: 73Mb L: 8/9 MS: 1 ChangeBinInt- 00:08:39.212 [2024-07-25 15:56:56.983420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000002c cdw11:00000000 00:08:39.212 [2024-07-25 15:56:56.983441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.212 #39 NEW cov: 12174 ft: 14780 corp: 32/180b lim: 10 exec/s: 39 rss: 73Mb L: 2/9 MS: 1 CopyPart- 00:08:39.212 [2024-07-25 15:56:57.023903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eded cdw11:00000000 00:08:39.212 [2024-07-25 15:56:57.023925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:08:39.212 [2024-07-25 15:56:57.023989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000067ed cdw11:00000000 00:08:39.212 [2024-07-25 15:56:57.024000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.212 [2024-07-25 15:56:57.024050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000aed cdw11:00000000 00:08:39.212 [2024-07-25 15:56:57.024061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:39.212 [2024-07-25 15:56:57.024109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ed5a cdw11:00000000 00:08:39.212 [2024-07-25 15:56:57.024120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:39.212 #40 NEW cov: 12174 ft: 14787 corp: 33/189b lim: 10 exec/s: 40 rss: 73Mb L: 9/9 MS: 1 CrossOver- 00:08:39.212 [2024-07-25 15:56:57.064139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001312 cdw11:00000000 00:08:39.212 [2024-07-25 15:56:57.064161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.213 [2024-07-25 15:56:57.064210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000012ff cdw11:00000000 00:08:39.213 [2024-07-25 15:56:57.064220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.213 [2024-07-25 15:56:57.064269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:39.213 [2024-07-25 15:56:57.064283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:39.213 [2024-07-25 15:56:57.064330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:39.213 [2024-07-25 15:56:57.064339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:39.213 [2024-07-25 15:56:57.064387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000190a cdw11:00000000 00:08:39.213 [2024-07-25 15:56:57.064397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:39.213 #41 NEW cov: 12174 ft: 14846 corp: 34/199b lim: 10 exec/s: 41 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:08:39.213 [2024-07-25 15:56:57.103872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eded cdw11:00000000 00:08:39.213 [2024-07-25 15:56:57.103894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.213 [2024-07-25 15:56:57.103958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005aed cdw11:00000000 00:08:39.213 [2024-07-25 15:56:57.103969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:08:39.213 #42 NEW cov: 12174 ft: 14872 corp: 35/204b lim: 10 exec/s: 42 rss: 73Mb L: 5/10 MS: 1 EraseBytes- 00:08:39.213 [2024-07-25 15:56:57.144237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:39.213 [2024-07-25 15:56:57.144260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.213 [2024-07-25 15:56:57.144325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000ed cdw11:00000000 00:08:39.213 [2024-07-25 15:56:57.144336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.213 [2024-07-25 15:56:57.144385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000016ed cdw11:00000000 00:08:39.213 [2024-07-25 15:56:57.144396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:39.213 [2024-07-25 15:56:57.144444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000eded cdw11:00000000 00:08:39.213 [2024-07-25 15:56:57.144454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:39.213 #43 NEW cov: 12174 ft: 14885 corp: 36/212b lim: 10 exec/s: 43 rss: 73Mb L: 8/10 MS: 1 CopyPart- 00:08:39.213 [2024-07-25 15:56:57.194395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eded cdw11:00000000 00:08:39.213 [2024-07-25 15:56:57.194416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.213 [2024-07-25 15:56:57.194483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000067ed cdw11:00000000 00:08:39.213 [2024-07-25 15:56:57.194495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.213 [2024-07-25 15:56:57.194544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000067ed cdw11:00000000 00:08:39.213 [2024-07-25 15:56:57.194555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:39.213 [2024-07-25 15:56:57.194604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000aed cdw11:00000000 00:08:39.213 [2024-07-25 15:56:57.194616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:39.472 #44 NEW cov: 12174 ft: 14909 corp: 37/221b lim: 10 exec/s: 44 rss: 73Mb L: 9/10 MS: 1 CopyPart- 00:08:39.472 [2024-07-25 15:56:57.234411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eded cdw11:00000000 00:08:39.472 [2024-07-25 15:56:57.234434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.472 [2024-07-25 15:56:57.234483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000067ed cdw11:00000000 00:08:39.472 [2024-07-25 15:56:57.234494] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.472 [2024-07-25 15:56:57.234543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000052e cdw11:00000000 00:08:39.473 [2024-07-25 15:56:57.234553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:39.473 #45 NEW cov: 12174 ft: 14919 corp: 38/227b lim: 10 exec/s: 45 rss: 73Mb L: 6/10 MS: 1 ChangeByte- 00:08:39.473 [2024-07-25 15:56:57.284631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ed0a cdw11:00000000 00:08:39.473 [2024-07-25 15:56:57.284653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.473 [2024-07-25 15:56:57.284702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:39.473 [2024-07-25 15:56:57.284713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.473 [2024-07-25 15:56:57.284766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 00:08:39.473 [2024-07-25 15:56:57.284777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:39.473 [2024-07-25 15:56:57.284826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:08:39.473 [2024-07-25 15:56:57.284836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:39.473 #46 NEW cov: 12174 ft: 14928 corp: 39/235b lim: 10 exec/s: 46 rss: 73Mb L: 8/10 MS: 1 ChangeBit- 00:08:39.473 [2024-07-25 15:56:57.334646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000013 cdw11:00000000 00:08:39.473 [2024-07-25 15:56:57.334668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.473 [2024-07-25 15:56:57.334735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002c12 cdw11:00000000 00:08:39.473 [2024-07-25 15:56:57.334746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.473 [2024-07-25 15:56:57.334801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000eb19 cdw11:00000000 00:08:39.473 [2024-07-25 15:56:57.334812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:39.473 #47 NEW cov: 12174 ft: 14941 corp: 40/241b lim: 10 exec/s: 47 rss: 74Mb L: 6/10 MS: 1 ChangeBinInt- 00:08:39.473 [2024-07-25 15:56:57.384539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000000a5 cdw11:00000000 00:08:39.473 [2024-07-25 15:56:57.384563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.473 #48 NEW cov: 12174 ft: 14995 corp: 41/243b lim: 10 exec/s: 48 rss: 74Mb L: 2/10 MS: 1 
ChangeByte- 00:08:39.473 [2024-07-25 15:56:57.435095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ed00 cdw11:00000000 00:08:39.473 [2024-07-25 15:56:57.435121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.473 [2024-07-25 15:56:57.435186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:39.473 [2024-07-25 15:56:57.435197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.473 [2024-07-25 15:56:57.435247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:39.473 [2024-07-25 15:56:57.435258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:39.473 [2024-07-25 15:56:57.435307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a00 cdw11:00000000 00:08:39.473 [2024-07-25 15:56:57.435317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:39.473 #49 NEW cov: 12174 ft: 15004 corp: 42/252b lim: 10 exec/s: 49 rss: 74Mb L: 9/10 MS: 1 ShuffleBytes- 00:08:39.732 [2024-07-25 15:56:57.475244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:39.732 [2024-07-25 15:56:57.475267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.732 [2024-07-25 15:56:57.475332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000ed cdw11:00000000 00:08:39.732 [2024-07-25 15:56:57.475344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.732 [2024-07-25 15:56:57.475393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000016ed cdw11:00000000 00:08:39.732 [2024-07-25 15:56:57.475403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:39.732 [2024-07-25 15:56:57.475454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ed0a cdw11:00000000 00:08:39.732 [2024-07-25 15:56:57.475464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:39.732 #50 NEW cov: 12174 ft: 15006 corp: 43/260b lim: 10 exec/s: 25 rss: 74Mb L: 8/10 MS: 1 CrossOver- 00:08:39.732 #50 DONE cov: 12174 ft: 15006 corp: 43/260b lim: 10 exec/s: 25 rss: 74Mb 00:08:39.732 ###### Recommended dictionary. ###### 00:08:39.732 "\012\000\000\000\000\000\000\000" # Uses: 0 00:08:39.732 ###### End of recommended dictionary. 
###### 00:08:39.732 Done 50 runs in 2 second(s) 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:39.733 15:56:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:08:39.733 [2024-07-25 15:56:57.650649] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
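For reference, the nvmf/run.sh trace above shows how each fuzzer instance derives its own TCP listen port and transport ID before llvm_nvme_fuzz is launched. A minimal standalone sketch of that derivation is below; it is not the actual run.sh, and the template filename and the redirect into /tmp/fuzz_json_8.conf are assumptions inferred from the sed command and the -c argument visible in the trace.

  #!/usr/bin/env bash
  # Sketch only: reproduce the per-fuzzer port/trid derivation seen in the trace.
  fuzzer_type=8
  port="44$(printf %02d "$fuzzer_type")"   # 4400 + zero-padded index -> 4408
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"
  # Rewrite the default trsvcid 4420 in the JSON config template (fuzz_json.conf,
  # under test/fuzz/llvm/nvmf/ in the trace); writing the result to the path that
  # is later passed via -c is an assumption.
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
      fuzz_json.conf > "/tmp/fuzz_json_${fuzzer_type}.conf"
  echo "Fuzzer ${fuzzer_type} will listen on: ${trid}"

The derived trid string is the same value passed to llvm_nvme_fuzz with -F in the command above, so the fuzzer connects to the NVMe/TCP target listening on 127.0.0.1 port 4408.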
00:08:39.733 [2024-07-25 15:56:57.650727] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168292 ] 00:08:39.733 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.992 [2024-07-25 15:56:57.826074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.992 [2024-07-25 15:56:57.889809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.992 [2024-07-25 15:56:57.948429] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.992 [2024-07-25 15:56:57.964662] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:08:39.992 INFO: Running with entropic power schedule (0xFF, 100). 00:08:39.992 INFO: Seed: 936668343 00:08:40.252 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:40.252 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:40.252 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:40.252 INFO: A corpus is not provided, starting from an empty corpus 00:08:40.252 [2024-07-25 15:56:58.020050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.252 [2024-07-25 15:56:58.020078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.252 #2 INITED cov: 11969 ft: 11967 corp: 1/1b exec/s: 0 rss: 69Mb 00:08:40.252 [2024-07-25 15:56:58.060194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.252 [2024-07-25 15:56:58.060217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.252 [2024-07-25 15:56:58.060272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.252 [2024-07-25 15:56:58.060283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.252 #3 NEW cov: 12088 ft: 13235 corp: 2/3b lim: 5 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 InsertByte- 00:08:40.252 [2024-07-25 15:56:58.110347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.252 [2024-07-25 15:56:58.110370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.252 [2024-07-25 15:56:58.110424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.252 [2024-07-25 15:56:58.110437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.252 #4 NEW cov: 12094 ft: 13411 corp: 3/5b lim: 5 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 InsertByte- 00:08:40.252 [2024-07-25 15:56:58.150771] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.252 [2024-07-25 15:56:58.150794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.252 [2024-07-25 15:56:58.150867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.252 [2024-07-25 15:56:58.150878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.252 [2024-07-25 15:56:58.150932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.252 [2024-07-25 15:56:58.150943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.252 [2024-07-25 15:56:58.150995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.252 [2024-07-25 15:56:58.151006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.252 #5 NEW cov: 12179 ft: 13953 corp: 4/9b lim: 5 exec/s: 0 rss: 69Mb L: 4/4 MS: 1 CrossOver- 00:08:40.252 [2024-07-25 15:56:58.200589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.252 [2024-07-25 15:56:58.200611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.252 [2024-07-25 15:56:58.200663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.252 [2024-07-25 15:56:58.200674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.252 #6 NEW cov: 12179 ft: 14016 corp: 5/11b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 ChangeBinInt- 00:08:40.511 [2024-07-25 15:56:58.251058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.511 [2024-07-25 15:56:58.251081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.511 [2024-07-25 15:56:58.251150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.511 [2024-07-25 15:56:58.251160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.511 [2024-07-25 15:56:58.251214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.511 [2024-07-25 15:56:58.251224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:08:40.511 [2024-07-25 15:56:58.251276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.511 [2024-07-25 15:56:58.251287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.511 #7 NEW cov: 12179 ft: 14086 corp: 6/15b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ChangeBit- 00:08:40.511 [2024-07-25 15:56:58.300885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.511 [2024-07-25 15:56:58.300911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.511 [2024-07-25 15:56:58.300966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.511 [2024-07-25 15:56:58.300978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.511 #8 NEW cov: 12179 ft: 14190 corp: 7/17b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 ShuffleBytes- 00:08:40.511 [2024-07-25 15:56:58.340982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.511 [2024-07-25 15:56:58.341003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.511 [2024-07-25 15:56:58.341073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.511 [2024-07-25 15:56:58.341084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.511 #9 NEW cov: 12179 ft: 14249 corp: 8/19b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 CrossOver- 00:08:40.511 [2024-07-25 15:56:58.381126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.511 [2024-07-25 15:56:58.381148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.511 [2024-07-25 15:56:58.381217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.511 [2024-07-25 15:56:58.381229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.511 #10 NEW cov: 12179 ft: 14299 corp: 9/21b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 ChangeByte- 00:08:40.511 [2024-07-25 15:56:58.421558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.511 [2024-07-25 15:56:58.421580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.511 [2024-07-25 15:56:58.421650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.511 [2024-07-25 15:56:58.421661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.511 [2024-07-25 15:56:58.421713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.511 [2024-07-25 15:56:58.421723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.512 [2024-07-25 15:56:58.421777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.512 [2024-07-25 15:56:58.421788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.512 #11 NEW cov: 12179 ft: 14413 corp: 10/25b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:40.512 [2024-07-25 15:56:58.461326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.512 [2024-07-25 15:56:58.461352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.512 [2024-07-25 15:56:58.461421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.512 [2024-07-25 15:56:58.461432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.512 #12 NEW cov: 12179 ft: 14456 corp: 11/27b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:08:40.771 [2024-07-25 15:56:58.511327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.771 [2024-07-25 15:56:58.511349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.771 #13 NEW cov: 12179 ft: 14527 corp: 12/28b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 EraseBytes- 00:08:40.771 [2024-07-25 15:56:58.562122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.771 [2024-07-25 15:56:58.562144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.771 [2024-07-25 15:56:58.562216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.771 [2024-07-25 15:56:58.562227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.771 [2024-07-25 15:56:58.562280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.771 [2024-07-25 15:56:58.562291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.771 [2024-07-25 15:56:58.562343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.771 [2024-07-25 15:56:58.562354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.771 [2024-07-25 15:56:58.562410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.771 [2024-07-25 15:56:58.562420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:40.771 #14 NEW cov: 12179 ft: 14644 corp: 13/33b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CrossOver- 00:08:40.771 [2024-07-25 15:56:58.611722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.771 [2024-07-25 15:56:58.611744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.771 [2024-07-25 15:56:58.611819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.771 [2024-07-25 15:56:58.611830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.771 #15 NEW cov: 12179 ft: 14657 corp: 14/35b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 ChangeBit- 00:08:40.771 [2024-07-25 15:56:58.652396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.771 [2024-07-25 15:56:58.652418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.771 [2024-07-25 15:56:58.652474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.771 [2024-07-25 15:56:58.652488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.771 [2024-07-25 15:56:58.652540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.771 [2024-07-25 15:56:58.652550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.771 [2024-07-25 15:56:58.652603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.771 [2024-07-25 15:56:58.652613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.771 [2024-07-25 15:56:58.652666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.771 [2024-07-25 
15:56:58.652677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:40.771 #16 NEW cov: 12179 ft: 14699 corp: 15/40b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ChangeByte- 00:08:40.771 [2024-07-25 15:56:58.701859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.771 [2024-07-25 15:56:58.701882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.771 #17 NEW cov: 12179 ft: 14708 corp: 16/41b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ChangeBit- 00:08:40.771 [2024-07-25 15:56:58.741990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.771 [2024-07-25 15:56:58.742013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.030 #18 NEW cov: 12179 ft: 14718 corp: 17/42b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ChangeBit- 00:08:41.030 [2024-07-25 15:56:58.782410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.030 [2024-07-25 15:56:58.782434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.030 [2024-07-25 15:56:58.782489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.030 [2024-07-25 15:56:58.782500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.030 [2024-07-25 15:56:58.782552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.030 [2024-07-25 15:56:58.782562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.030 #19 NEW cov: 12179 ft: 14883 corp: 18/45b lim: 5 exec/s: 0 rss: 70Mb L: 3/5 MS: 1 InsertByte- 00:08:41.030 [2024-07-25 15:56:58.822173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.030 [2024-07-25 15:56:58.822197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.030 #20 NEW cov: 12179 ft: 14913 corp: 19/46b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 EraseBytes- 00:08:41.030 [2024-07-25 15:56:58.872829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.030 [2024-07-25 15:56:58.872855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.030 [2024-07-25 15:56:58.872926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.030 
[2024-07-25 15:56:58.872937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.030 [2024-07-25 15:56:58.872990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.030 [2024-07-25 15:56:58.873001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.030 [2024-07-25 15:56:58.873054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.030 [2024-07-25 15:56:58.873065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.030 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:41.030 #21 NEW cov: 12202 ft: 14952 corp: 20/50b lim: 5 exec/s: 21 rss: 71Mb L: 4/5 MS: 1 ChangeByte- 00:08:41.290 [2024-07-25 15:56:59.023595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.023631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.290 [2024-07-25 15:56:59.023693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.023706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.290 [2024-07-25 15:56:59.023769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.023782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.290 [2024-07-25 15:56:59.023841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.023853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.290 [2024-07-25 15:56:59.023912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.023924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:41.290 #22 NEW cov: 12202 ft: 14960 corp: 21/55b lim: 5 exec/s: 22 rss: 71Mb L: 5/5 MS: 1 CopyPart- 00:08:41.290 [2024-07-25 15:56:59.063207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.063232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.290 [2024-07-25 
15:56:59.063288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.063299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.290 [2024-07-25 15:56:59.063352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.063365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.290 #23 NEW cov: 12202 ft: 14988 corp: 22/58b lim: 5 exec/s: 23 rss: 71Mb L: 3/5 MS: 1 ShuffleBytes- 00:08:41.290 [2024-07-25 15:56:59.113508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.113531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.290 [2024-07-25 15:56:59.113604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.113615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.290 [2024-07-25 15:56:59.113669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.113679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.290 [2024-07-25 15:56:59.113733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.113743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.290 #24 NEW cov: 12202 ft: 15037 corp: 23/62b lim: 5 exec/s: 24 rss: 71Mb L: 4/5 MS: 1 ChangeByte- 00:08:41.290 [2024-07-25 15:56:59.153421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.153444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.290 [2024-07-25 15:56:59.153517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.153528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.290 [2024-07-25 15:56:59.153583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.153594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.290 #25 NEW cov: 12202 ft: 15073 corp: 24/65b lim: 5 exec/s: 25 rss: 71Mb L: 3/5 MS: 1 CopyPart- 00:08:41.290 [2024-07-25 15:56:59.193392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.193414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.290 [2024-07-25 15:56:59.193484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.193496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.290 #26 NEW cov: 12202 ft: 15077 corp: 25/67b lim: 5 exec/s: 26 rss: 71Mb L: 2/5 MS: 1 InsertByte- 00:08:41.290 [2024-07-25 15:56:59.233690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.233713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.290 [2024-07-25 15:56:59.233776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.233803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.290 [2024-07-25 15:56:59.233857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.233867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.290 #27 NEW cov: 12202 ft: 15102 corp: 26/70b lim: 5 exec/s: 27 rss: 71Mb L: 3/5 MS: 1 EraseBytes- 00:08:41.290 [2024-07-25 15:56:59.273485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.290 [2024-07-25 15:56:59.273508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.550 #28 NEW cov: 12202 ft: 15146 corp: 27/71b lim: 5 exec/s: 28 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:08:41.550 [2024-07-25 15:56:59.324137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.550 [2024-07-25 15:56:59.324160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.550 [2024-07-25 15:56:59.324217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.550 [2024-07-25 15:56:59.324228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.550 [2024-07-25 15:56:59.324283] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.550 [2024-07-25 15:56:59.324294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.550 [2024-07-25 15:56:59.324349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.550 [2024-07-25 15:56:59.324360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.550 #29 NEW cov: 12202 ft: 15159 corp: 28/75b lim: 5 exec/s: 29 rss: 72Mb L: 4/5 MS: 1 ChangeBit- 00:08:41.550 [2024-07-25 15:56:59.373926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.550 [2024-07-25 15:56:59.373948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.550 [2024-07-25 15:56:59.374020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.550 [2024-07-25 15:56:59.374032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.550 #30 NEW cov: 12202 ft: 15163 corp: 29/77b lim: 5 exec/s: 30 rss: 72Mb L: 2/5 MS: 1 ChangeBinInt- 00:08:41.550 [2024-07-25 15:56:59.414360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.550 [2024-07-25 15:56:59.414383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.550 [2024-07-25 15:56:59.414439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.550 [2024-07-25 15:56:59.414453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.550 [2024-07-25 15:56:59.414506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.550 [2024-07-25 15:56:59.414516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.550 [2024-07-25 15:56:59.414568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.550 [2024-07-25 15:56:59.414579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.550 #31 NEW cov: 12202 ft: 15168 corp: 30/81b lim: 5 exec/s: 31 rss: 72Mb L: 4/5 MS: 1 ShuffleBytes- 00:08:41.550 [2024-07-25 15:56:59.464058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.550 [2024-07-25 15:56:59.464080] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.550 #32 NEW cov: 12202 ft: 15188 corp: 31/82b lim: 5 exec/s: 32 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:08:41.550 [2024-07-25 15:56:59.504662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.550 [2024-07-25 15:56:59.504684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.550 [2024-07-25 15:56:59.504739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.550 [2024-07-25 15:56:59.504749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.550 [2024-07-25 15:56:59.504809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.550 [2024-07-25 15:56:59.504821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.550 [2024-07-25 15:56:59.504877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.550 [2024-07-25 15:56:59.504888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.550 #33 NEW cov: 12202 ft: 15208 corp: 32/86b lim: 5 exec/s: 33 rss: 72Mb L: 4/5 MS: 1 ShuffleBytes- 00:08:41.809 [2024-07-25 15:56:59.544284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.809 [2024-07-25 15:56:59.544306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.809 #34 NEW cov: 12202 ft: 15251 corp: 33/87b lim: 5 exec/s: 34 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:08:41.809 [2024-07-25 15:56:59.594547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.809 [2024-07-25 15:56:59.594570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.809 [2024-07-25 15:56:59.594625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.809 [2024-07-25 15:56:59.594636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.810 #35 NEW cov: 12202 ft: 15264 corp: 34/89b lim: 5 exec/s: 35 rss: 72Mb L: 2/5 MS: 1 CopyPart- 00:08:41.810 [2024-07-25 15:56:59.644740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.810 [2024-07-25 15:56:59.644766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:41.810 [2024-07-25 15:56:59.644836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.810 [2024-07-25 15:56:59.644848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.810 #36 NEW cov: 12202 ft: 15284 corp: 35/91b lim: 5 exec/s: 36 rss: 72Mb L: 2/5 MS: 1 ChangeBinInt- 00:08:41.810 [2024-07-25 15:56:59.684855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.810 [2024-07-25 15:56:59.684877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.810 [2024-07-25 15:56:59.684948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.810 [2024-07-25 15:56:59.684959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.810 #37 NEW cov: 12202 ft: 15302 corp: 36/93b lim: 5 exec/s: 37 rss: 72Mb L: 2/5 MS: 1 CopyPart- 00:08:41.810 [2024-07-25 15:56:59.724796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.810 [2024-07-25 15:56:59.724818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.810 #38 NEW cov: 12202 ft: 15309 corp: 37/94b lim: 5 exec/s: 38 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:08:41.810 [2024-07-25 15:56:59.765244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.810 [2024-07-25 15:56:59.765266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.810 [2024-07-25 15:56:59.765339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.810 [2024-07-25 15:56:59.765351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.810 [2024-07-25 15:56:59.765405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.810 [2024-07-25 15:56:59.765416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.069 #39 NEW cov: 12202 ft: 15367 corp: 38/97b lim: 5 exec/s: 39 rss: 72Mb L: 3/5 MS: 1 ShuffleBytes- 00:08:42.069 [2024-07-25 15:56:59.815411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.069 [2024-07-25 15:56:59.815434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.069 [2024-07-25 15:56:59.815490] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.069 [2024-07-25 15:56:59.815501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.069 [2024-07-25 15:56:59.815558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.069 [2024-07-25 15:56:59.815569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.069 #40 NEW cov: 12202 ft: 15372 corp: 39/100b lim: 5 exec/s: 40 rss: 72Mb L: 3/5 MS: 1 ShuffleBytes- 00:08:42.069 [2024-07-25 15:56:59.865555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.069 [2024-07-25 15:56:59.865578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.069 [2024-07-25 15:56:59.865650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.069 [2024-07-25 15:56:59.865662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.069 [2024-07-25 15:56:59.865716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.069 [2024-07-25 15:56:59.865727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.069 #41 NEW cov: 12202 ft: 15397 corp: 40/103b lim: 5 exec/s: 41 rss: 72Mb L: 3/5 MS: 1 ChangeByte- 00:08:42.069 [2024-07-25 15:56:59.905338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.069 [2024-07-25 15:56:59.905361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.069 #42 NEW cov: 12202 ft: 15417 corp: 41/104b lim: 5 exec/s: 42 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:08:42.069 [2024-07-25 15:56:59.955470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.069 [2024-07-25 15:56:59.955492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.069 #43 NEW cov: 12202 ft: 15454 corp: 42/105b lim: 5 exec/s: 43 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:08:42.069 [2024-07-25 15:57:00.005964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.069 [2024-07-25 15:57:00.005991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.069 [2024-07-25 15:57:00.006048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.069 [2024-07-25 15:57:00.006060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.069 [2024-07-25 15:57:00.006115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.069 [2024-07-25 15:57:00.006126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.069 #44 NEW cov: 12202 ft: 15475 corp: 43/108b lim: 5 exec/s: 22 rss: 72Mb L: 3/5 MS: 1 InsertByte- 00:08:42.069 #44 DONE cov: 12202 ft: 15475 corp: 43/108b lim: 5 exec/s: 22 rss: 72Mb 00:08:42.069 Done 44 runs in 2 second(s) 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:42.328 15:57:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:08:42.328 [2024-07-25 15:57:00.205315] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:08:42.328 [2024-07-25 15:57:00.205391] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168660 ] 00:08:42.328 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.588 [2024-07-25 15:57:00.394672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.588 [2024-07-25 15:57:00.463171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.588 [2024-07-25 15:57:00.521729] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.588 [2024-07-25 15:57:00.537961] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:08:42.588 INFO: Running with entropic power schedule (0xFF, 100). 00:08:42.588 INFO: Seed: 3509686069 00:08:42.588 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:42.588 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:42.588 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:42.588 INFO: A corpus is not provided, starting from an empty corpus 00:08:42.846 [2024-07-25 15:57:00.593352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.846 [2024-07-25 15:57:00.593382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.846 #2 INITED cov: 11969 ft: 11968 corp: 1/1b exec/s: 0 rss: 70Mb 00:08:42.846 [2024-07-25 15:57:00.633480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.846 [2024-07-25 15:57:00.633503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.846 [2024-07-25 15:57:00.633569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.846 [2024-07-25 15:57:00.633584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.846 #3 NEW cov: 12088 ft: 13171 corp: 2/3b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:08:42.846 [2024-07-25 15:57:00.683623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.846 [2024-07-25 15:57:00.683645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.846 [2024-07-25 15:57:00.683696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.846 [2024-07-25 15:57:00.683706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.846 #4 NEW cov: 12094 ft: 13297 corp: 3/5b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CopyPart- 00:08:42.846 [2024-07-25 15:57:00.733745] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.846 [2024-07-25 15:57:00.733772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.846 [2024-07-25 15:57:00.733822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.846 [2024-07-25 15:57:00.733833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.846 #5 NEW cov: 12179 ft: 13660 corp: 4/7b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CrossOver- 00:08:42.846 [2024-07-25 15:57:00.774046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.846 [2024-07-25 15:57:00.774069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.846 [2024-07-25 15:57:00.774121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.846 [2024-07-25 15:57:00.774132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.846 [2024-07-25 15:57:00.774182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.846 [2024-07-25 15:57:00.774191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.846 #6 NEW cov: 12179 ft: 13960 corp: 5/10b lim: 5 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 InsertByte- 00:08:42.846 [2024-07-25 15:57:00.814296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.846 [2024-07-25 15:57:00.814320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.846 [2024-07-25 15:57:00.814371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.846 [2024-07-25 15:57:00.814382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.846 [2024-07-25 15:57:00.814433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.846 [2024-07-25 15:57:00.814443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.846 [2024-07-25 15:57:00.814496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.846 [2024-07-25 15:57:00.814506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:08:43.106 #7 NEW cov: 12179 ft: 14267 corp: 6/14b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 CopyPart- 00:08:43.106 [2024-07-25 15:57:00.864113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.106 [2024-07-25 15:57:00.864136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.106 [2024-07-25 15:57:00.864186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.106 [2024-07-25 15:57:00.864197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.106 #8 NEW cov: 12179 ft: 14319 corp: 7/16b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 CrossOver- 00:08:43.106 [2024-07-25 15:57:00.914221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.106 [2024-07-25 15:57:00.914243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.106 [2024-07-25 15:57:00.914272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.106 [2024-07-25 15:57:00.914282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.106 #9 NEW cov: 12179 ft: 14359 corp: 8/18b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 ShuffleBytes- 00:08:43.106 [2024-07-25 15:57:00.954317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.106 [2024-07-25 15:57:00.954340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.106 [2024-07-25 15:57:00.954389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.106 [2024-07-25 15:57:00.954401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.106 #10 NEW cov: 12179 ft: 14432 corp: 9/20b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:08:43.106 [2024-07-25 15:57:00.994436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.106 [2024-07-25 15:57:00.994458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.106 [2024-07-25 15:57:00.994527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.106 [2024-07-25 15:57:00.994539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.106 #11 NEW cov: 12179 ft: 14499 corp: 10/22b lim: 5 exec/s: 0 rss: 71Mb L: 2/4 MS: 1 ChangeBit- 
00:08:43.106 [2024-07-25 15:57:01.044714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.106 [2024-07-25 15:57:01.044737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.106 [2024-07-25 15:57:01.044790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.106 [2024-07-25 15:57:01.044804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.106 [2024-07-25 15:57:01.044852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.106 [2024-07-25 15:57:01.044862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.106 #12 NEW cov: 12179 ft: 14522 corp: 11/25b lim: 5 exec/s: 0 rss: 71Mb L: 3/4 MS: 1 CrossOver- 00:08:43.106 [2024-07-25 15:57:01.084697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.106 [2024-07-25 15:57:01.084719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.106 [2024-07-25 15:57:01.084772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.106 [2024-07-25 15:57:01.084784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.366 #13 NEW cov: 12179 ft: 14611 corp: 12/27b lim: 5 exec/s: 0 rss: 71Mb L: 2/4 MS: 1 ChangeBinInt- 00:08:43.366 [2024-07-25 15:57:01.124997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.366 [2024-07-25 15:57:01.125019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.366 [2024-07-25 15:57:01.125070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.366 [2024-07-25 15:57:01.125080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.366 [2024-07-25 15:57:01.125131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.366 [2024-07-25 15:57:01.125141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.366 #14 NEW cov: 12179 ft: 14655 corp: 13/30b lim: 5 exec/s: 0 rss: 71Mb L: 3/4 MS: 1 CrossOver- 00:08:43.366 [2024-07-25 15:57:01.174938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:43.366 [2024-07-25 15:57:01.174961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.366 [2024-07-25 15:57:01.175012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.366 [2024-07-25 15:57:01.175023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.366 #15 NEW cov: 12179 ft: 14673 corp: 14/32b lim: 5 exec/s: 0 rss: 71Mb L: 2/4 MS: 1 CrossOver- 00:08:43.366 [2024-07-25 15:57:01.225437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.366 [2024-07-25 15:57:01.225459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.366 [2024-07-25 15:57:01.225525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.366 [2024-07-25 15:57:01.225538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.366 [2024-07-25 15:57:01.225589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.366 [2024-07-25 15:57:01.225599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.366 [2024-07-25 15:57:01.225649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.366 [2024-07-25 15:57:01.225659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:43.366 #16 NEW cov: 12179 ft: 14682 corp: 15/36b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ChangeByte- 00:08:43.366 [2024-07-25 15:57:01.275442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.366 [2024-07-25 15:57:01.275464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.366 [2024-07-25 15:57:01.275517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.366 [2024-07-25 15:57:01.275528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.366 [2024-07-25 15:57:01.275578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.367 [2024-07-25 15:57:01.275589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.367 #17 NEW cov: 12179 ft: 14685 corp: 16/39b lim: 5 exec/s: 0 rss: 71Mb L: 3/4 MS: 1 InsertByte- 00:08:43.367 [2024-07-25 
15:57:01.325573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.367 [2024-07-25 15:57:01.325597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.367 [2024-07-25 15:57:01.325663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.367 [2024-07-25 15:57:01.325674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.367 [2024-07-25 15:57:01.325724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.367 [2024-07-25 15:57:01.325735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.626 #18 NEW cov: 12179 ft: 14721 corp: 17/42b lim: 5 exec/s: 0 rss: 71Mb L: 3/4 MS: 1 CMP- DE: "\000\006"- 00:08:43.626 [2024-07-25 15:57:01.376004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.626 [2024-07-25 15:57:01.376027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.626 [2024-07-25 15:57:01.376095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.626 [2024-07-25 15:57:01.376106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.626 [2024-07-25 15:57:01.376154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.626 [2024-07-25 15:57:01.376168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.626 [2024-07-25 15:57:01.376219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.626 [2024-07-25 15:57:01.376230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:43.626 [2024-07-25 15:57:01.376279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.626 [2024-07-25 15:57:01.376290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:43.626 #19 NEW cov: 12179 ft: 14818 corp: 18/47b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:08:43.626 [2024-07-25 15:57:01.415846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.626 [2024-07-25 15:57:01.415869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.626 [2024-07-25 15:57:01.415921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.626 [2024-07-25 15:57:01.415932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.626 [2024-07-25 15:57:01.415983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.626 [2024-07-25 15:57:01.415994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.626 #20 NEW cov: 12179 ft: 14831 corp: 19/50b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 ChangeBit- 00:08:43.626 [2024-07-25 15:57:01.455788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.626 [2024-07-25 15:57:01.455812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.626 [2024-07-25 15:57:01.455864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.626 [2024-07-25 15:57:01.455876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.626 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:43.626 #21 NEW cov: 12202 ft: 14900 corp: 20/52b lim: 5 exec/s: 21 rss: 72Mb L: 2/5 MS: 1 ShuffleBytes- 00:08:43.886 [2024-07-25 15:57:01.616744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.886 [2024-07-25 15:57:01.616805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.886 [2024-07-25 15:57:01.616881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.886 [2024-07-25 15:57:01.616902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.886 [2024-07-25 15:57:01.616974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.886 [2024-07-25 15:57:01.616993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.886 #22 NEW cov: 12202 ft: 14992 corp: 21/55b lim: 5 exec/s: 22 rss: 72Mb L: 3/5 MS: 1 EraseBytes- 00:08:43.886 [2024-07-25 15:57:01.666442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.886 [2024-07-25 15:57:01.666466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.886 [2024-07-25 
15:57:01.666519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.886 [2024-07-25 15:57:01.666530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.886 [2024-07-25 15:57:01.666582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.886 [2024-07-25 15:57:01.666593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.886 #23 NEW cov: 12202 ft: 15026 corp: 22/58b lim: 5 exec/s: 23 rss: 73Mb L: 3/5 MS: 1 PersAutoDict- DE: "\000\006"- 00:08:43.886 [2024-07-25 15:57:01.716292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.886 [2024-07-25 15:57:01.716316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.886 #24 NEW cov: 12202 ft: 15036 corp: 23/59b lim: 5 exec/s: 24 rss: 73Mb L: 1/5 MS: 1 CopyPart- 00:08:43.886 [2024-07-25 15:57:01.756560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.886 [2024-07-25 15:57:01.756583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.886 [2024-07-25 15:57:01.756652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.886 [2024-07-25 15:57:01.756663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.886 #25 NEW cov: 12202 ft: 15053 corp: 24/61b lim: 5 exec/s: 25 rss: 73Mb L: 2/5 MS: 1 ChangeByte- 00:08:43.886 [2024-07-25 15:57:01.807157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.886 [2024-07-25 15:57:01.807180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.886 [2024-07-25 15:57:01.807232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.886 [2024-07-25 15:57:01.807243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.886 [2024-07-25 15:57:01.807292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.886 [2024-07-25 15:57:01.807303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.886 [2024-07-25 15:57:01.807353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:43.886 [2024-07-25 15:57:01.807364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:43.886 [2024-07-25 15:57:01.807416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.886 [2024-07-25 15:57:01.807430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:43.886 #26 NEW cov: 12202 ft: 15064 corp: 25/66b lim: 5 exec/s: 26 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:08:43.886 [2024-07-25 15:57:01.866866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.886 [2024-07-25 15:57:01.866888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.886 [2024-07-25 15:57:01.866957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.886 [2024-07-25 15:57:01.866969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.146 #27 NEW cov: 12202 ft: 15088 corp: 26/68b lim: 5 exec/s: 27 rss: 73Mb L: 2/5 MS: 1 EraseBytes- 00:08:44.146 [2024-07-25 15:57:01.907115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.146 [2024-07-25 15:57:01.907137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.146 [2024-07-25 15:57:01.907204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.146 [2024-07-25 15:57:01.907215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.146 [2024-07-25 15:57:01.907266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.146 [2024-07-25 15:57:01.907276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.146 #28 NEW cov: 12202 ft: 15186 corp: 27/71b lim: 5 exec/s: 28 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:08:44.146 [2024-07-25 15:57:01.947084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.146 [2024-07-25 15:57:01.947107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.146 [2024-07-25 15:57:01.947174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.146 [2024-07-25 15:57:01.947185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.146 [2024-07-25 15:57:01.997359] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.146 [2024-07-25 15:57:01.997381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.146 [2024-07-25 15:57:01.997431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.146 [2024-07-25 15:57:01.997441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.146 [2024-07-25 15:57:01.997492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.146 [2024-07-25 15:57:01.997503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.146 #30 NEW cov: 12202 ft: 15225 corp: 28/74b lim: 5 exec/s: 30 rss: 73Mb L: 3/5 MS: 2 ChangeBit-InsertByte- 00:08:44.146 [2024-07-25 15:57:02.037536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.146 [2024-07-25 15:57:02.037558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.146 [2024-07-25 15:57:02.037610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.146 [2024-07-25 15:57:02.037621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.146 [2024-07-25 15:57:02.037672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.146 [2024-07-25 15:57:02.037683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.146 #31 NEW cov: 12202 ft: 15243 corp: 29/77b lim: 5 exec/s: 31 rss: 73Mb L: 3/5 MS: 1 CopyPart- 00:08:44.146 [2024-07-25 15:57:02.087525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.146 [2024-07-25 15:57:02.087548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.146 [2024-07-25 15:57:02.087599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.146 [2024-07-25 15:57:02.087610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.146 #32 NEW cov: 12202 ft: 15252 corp: 30/79b lim: 5 exec/s: 32 rss: 73Mb L: 2/5 MS: 1 EraseBytes- 00:08:44.406 [2024-07-25 15:57:02.137834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 
15:57:02.137857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.406 [2024-07-25 15:57:02.137908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 15:57:02.137919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.406 [2024-07-25 15:57:02.137972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 15:57:02.137983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.406 #33 NEW cov: 12202 ft: 15268 corp: 31/82b lim: 5 exec/s: 33 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:08:44.406 [2024-07-25 15:57:02.187779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 15:57:02.187817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.406 [2024-07-25 15:57:02.187869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 15:57:02.187881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.406 #34 NEW cov: 12202 ft: 15279 corp: 32/84b lim: 5 exec/s: 34 rss: 74Mb L: 2/5 MS: 1 ShuffleBytes- 00:08:44.406 [2024-07-25 15:57:02.238072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 15:57:02.238096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.406 [2024-07-25 15:57:02.238164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 15:57:02.238175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.406 [2024-07-25 15:57:02.238228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 15:57:02.238238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.406 #35 NEW cov: 12202 ft: 15295 corp: 33/87b lim: 5 exec/s: 35 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:08:44.406 [2024-07-25 15:57:02.288101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 15:57:02.288123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.406 [2024-07-25 15:57:02.288176] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 15:57:02.288186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.406 #36 NEW cov: 12202 ft: 15309 corp: 34/89b lim: 5 exec/s: 36 rss: 74Mb L: 2/5 MS: 1 ChangeByte- 00:08:44.406 [2024-07-25 15:57:02.338513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 15:57:02.338536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.406 [2024-07-25 15:57:02.338588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 15:57:02.338599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.406 [2024-07-25 15:57:02.338651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 15:57:02.338662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.406 [2024-07-25 15:57:02.338713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 15:57:02.338724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:44.406 #37 NEW cov: 12202 ft: 15319 corp: 35/93b lim: 5 exec/s: 37 rss: 74Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:08:44.406 [2024-07-25 15:57:02.388329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 15:57:02.388369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.406 [2024-07-25 15:57:02.388421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.406 [2024-07-25 15:57:02.388432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.665 #38 NEW cov: 12202 ft: 15334 corp: 36/95b lim: 5 exec/s: 38 rss: 74Mb L: 2/5 MS: 1 ChangeBit- 00:08:44.665 [2024-07-25 15:57:02.428730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.665 [2024-07-25 15:57:02.428752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.665 [2024-07-25 15:57:02.428808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.665 [2024-07-25 
15:57:02.428819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.665 [2024-07-25 15:57:02.428871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.665 [2024-07-25 15:57:02.428881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.665 [2024-07-25 15:57:02.428933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.665 [2024-07-25 15:57:02.428943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:44.665 #39 NEW cov: 12202 ft: 15349 corp: 37/99b lim: 5 exec/s: 39 rss: 74Mb L: 4/5 MS: 1 CrossOver- 00:08:44.665 [2024-07-25 15:57:02.468858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.665 [2024-07-25 15:57:02.468881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.665 [2024-07-25 15:57:02.468959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.665 [2024-07-25 15:57:02.468970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.665 [2024-07-25 15:57:02.469020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.665 [2024-07-25 15:57:02.469030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.665 [2024-07-25 15:57:02.469081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.665 [2024-07-25 15:57:02.469091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:44.665 #40 NEW cov: 12202 ft: 15356 corp: 38/103b lim: 5 exec/s: 40 rss: 74Mb L: 4/5 MS: 1 CrossOver- 00:08:44.665 [2024-07-25 15:57:02.519034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.665 [2024-07-25 15:57:02.519055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.665 [2024-07-25 15:57:02.519121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.665 [2024-07-25 15:57:02.519131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.665 [2024-07-25 15:57:02.519183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.665 [2024-07-25 15:57:02.519194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.665 [2024-07-25 15:57:02.519247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.665 [2024-07-25 15:57:02.519258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:44.665 #41 NEW cov: 12202 ft: 15360 corp: 39/107b lim: 5 exec/s: 41 rss: 74Mb L: 4/5 MS: 1 ChangeBit- 00:08:44.665 [2024-07-25 15:57:02.568988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.665 [2024-07-25 15:57:02.569011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.665 [2024-07-25 15:57:02.569065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.665 [2024-07-25 15:57:02.569076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.665 [2024-07-25 15:57:02.569127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.665 [2024-07-25 15:57:02.569137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.665 #42 NEW cov: 12202 ft: 15386 corp: 40/110b lim: 5 exec/s: 21 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:08:44.665 #42 DONE cov: 12202 ft: 15386 corp: 40/110b lim: 5 exec/s: 21 rss: 74Mb 00:08:44.665 ###### Recommended dictionary. ###### 00:08:44.665 "\000\006" # Uses: 1 00:08:44.665 ###### End of recommended dictionary. 
###### 00:08:44.665 Done 42 runs in 2 second(s) 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:44.925 15:57:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:08:44.925 [2024-07-25 15:57:02.746269] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:08:44.925 [2024-07-25 15:57:02.746331] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169145 ] 00:08:44.925 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.184 [2024-07-25 15:57:02.928786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.184 [2024-07-25 15:57:02.994971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.184 [2024-07-25 15:57:03.053657] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.184 [2024-07-25 15:57:03.069907] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:08:45.184 INFO: Running with entropic power schedule (0xFF, 100). 00:08:45.184 INFO: Seed: 1744696403 00:08:45.184 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:45.184 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:45.184 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:45.184 INFO: A corpus is not provided, starting from an empty corpus 00:08:45.184 #2 INITED exec/s: 0 rss: 63Mb 00:08:45.184 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:45.184 This may also happen if the target rejected all inputs we tried so far 00:08:45.184 [2024-07-25 15:57:03.131149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.184 [2024-07-25 15:57:03.131182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.184 [2024-07-25 15:57:03.131288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.184 [2024-07-25 15:57:03.131300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.184 [2024-07-25 15:57:03.131389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.184 [2024-07-25 15:57:03.131403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.184 [2024-07-25 15:57:03.131504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.184 [2024-07-25 15:57:03.131516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:45.185 [2024-07-25 15:57:03.131604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00004a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.185 [2024-07-25 15:57:03.131617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:45.443 NEW_FUNC[1/700]: 0x490cf0 in fuzz_admin_security_receive_command 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:08:45.443 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:45.443 #5 NEW cov: 11994 ft: 11989 corp: 2/41b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 3 ChangeBit-CrossOver-InsertRepeatedBytes- 00:08:45.443 [2024-07-25 15:57:03.290702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0aadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.443 [2024-07-25 15:57:03.290735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.443 [2024-07-25 15:57:03.290854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.443 [2024-07-25 15:57:03.290869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.443 #8 NEW cov: 12111 ft: 13041 corp: 3/62b lim: 40 exec/s: 0 rss: 71Mb L: 21/40 MS: 3 CopyPart-CrossOver-InsertRepeatedBytes- 00:08:45.443 [2024-07-25 15:57:03.341027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.443 [2024-07-25 15:57:03.341053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.443 [2024-07-25 15:57:03.341172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.443 [2024-07-25 15:57:03.341190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.443 [2024-07-25 15:57:03.341293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:1515bebe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.443 [2024-07-25 15:57:03.341307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.443 #11 NEW cov: 12117 ft: 13564 corp: 4/88b lim: 40 exec/s: 0 rss: 71Mb L: 26/40 MS: 3 InsertRepeatedBytes-ShuffleBytes-InsertRepeatedBytes- 00:08:45.443 [2024-07-25 15:57:03.391908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.443 [2024-07-25 15:57:03.391934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.443 [2024-07-25 15:57:03.392040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000f700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.443 [2024-07-25 15:57:03.392054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.443 [2024-07-25 15:57:03.392160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.443 [2024-07-25 
15:57:03.392172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.443 [2024-07-25 15:57:03.392284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.443 [2024-07-25 15:57:03.392297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:45.443 [2024-07-25 15:57:03.392396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00004a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.443 [2024-07-25 15:57:03.392409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:45.443 #12 NEW cov: 12202 ft: 13816 corp: 5/128b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 ChangeBinInt- 00:08:45.701 [2024-07-25 15:57:03.461502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:159b1515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.701 [2024-07-25 15:57:03.461527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.701 [2024-07-25 15:57:03.461637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.702 [2024-07-25 15:57:03.461651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.702 [2024-07-25 15:57:03.461764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:1515bebe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.702 [2024-07-25 15:57:03.461778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.702 #13 NEW cov: 12202 ft: 13914 corp: 6/154b lim: 40 exec/s: 0 rss: 72Mb L: 26/40 MS: 1 ChangeByte- 00:08:45.702 [2024-07-25 15:57:03.521738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0aadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.702 [2024-07-25 15:57:03.521764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.702 [2024-07-25 15:57:03.521876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.702 [2024-07-25 15:57:03.521890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.702 [2024-07-25 15:57:03.521989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:6c6c6c6c cdw11:6c6c6c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.702 [2024-07-25 15:57:03.522001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.702 #14 NEW cov: 12202 ft: 14063 corp: 7/185b lim: 40 exec/s: 0 rss: 72Mb L: 31/40 MS: 1 InsertRepeatedBytes- 00:08:45.702 [2024-07-25 15:57:03.592100] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.702 [2024-07-25 15:57:03.592122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.702 [2024-07-25 15:57:03.592235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.702 [2024-07-25 15:57:03.592252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.702 [2024-07-25 15:57:03.592377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:1515bebe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.702 [2024-07-25 15:57:03.592392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.702 #15 NEW cov: 12202 ft: 14143 corp: 8/211b lim: 40 exec/s: 0 rss: 72Mb L: 26/40 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:08:45.702 [2024-07-25 15:57:03.662617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0aadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.702 [2024-07-25 15:57:03.662639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.702 [2024-07-25 15:57:03.662753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.702 [2024-07-25 15:57:03.662770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.702 [2024-07-25 15:57:03.662879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ad000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.702 [2024-07-25 15:57:03.662892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.702 [2024-07-25 15:57:03.663001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.702 [2024-07-25 15:57:03.663016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:45.702 #16 NEW cov: 12202 ft: 14182 corp: 9/243b lim: 40 exec/s: 0 rss: 72Mb L: 32/40 MS: 1 InsertRepeatedBytes- 00:08:45.961 [2024-07-25 15:57:03.713352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.713377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.961 [2024-07-25 15:57:03.713491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000f700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.713505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.961 [2024-07-25 15:57:03.713612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.713624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.961 [2024-07-25 15:57:03.713734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.713747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:45.961 [2024-07-25 15:57:03.713857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00004a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.713872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:45.961 #17 NEW cov: 12202 ft: 14208 corp: 10/283b lim: 40 exec/s: 0 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:08:45.961 [2024-07-25 15:57:03.783238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0aadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.783259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.961 [2024-07-25 15:57:03.783369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.783382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.961 [2024-07-25 15:57:03.783491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ad000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.783506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.961 [2024-07-25 15:57:03.783614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.783625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:45.961 #18 NEW cov: 12202 ft: 14284 corp: 11/315b lim: 40 exec/s: 0 rss: 72Mb L: 32/40 MS: 1 ShuffleBytes- 00:08:45.961 [2024-07-25 15:57:03.852879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0aadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.852903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.961 [2024-07-25 15:57:03.853014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adad0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.853032] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.961 #19 NEW cov: 12202 ft: 14318 corp: 12/337b lim: 40 exec/s: 0 rss: 72Mb L: 22/40 MS: 1 EraseBytes- 00:08:45.961 [2024-07-25 15:57:03.904015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.904040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.961 [2024-07-25 15:57:03.904138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000f700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.904152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.961 [2024-07-25 15:57:03.904247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.904261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.961 [2024-07-25 15:57:03.904365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.904379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:45.961 [2024-07-25 15:57:03.904477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00004a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.961 [2024-07-25 15:57:03.904492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:45.961 #20 NEW cov: 12202 ft: 14371 corp: 13/377b lim: 40 exec/s: 0 rss: 72Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:46.220 [2024-07-25 15:57:03.953994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0aadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.220 [2024-07-25 15:57:03.954020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.220 [2024-07-25 15:57:03.954127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adad0000 cdw11:00001515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.220 [2024-07-25 15:57:03.954141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.220 [2024-07-25 15:57:03.954233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15159b15 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.220 [2024-07-25 15:57:03.954246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.220 [2024-07-25 15:57:03.954343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:15151500 cdw11:000000ad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:46.220 [2024-07-25 15:57:03.954356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:46.220 #21 NEW cov: 12202 ft: 14412 corp: 14/412b lim: 40 exec/s: 0 rss: 72Mb L: 35/40 MS: 1 CrossOver- 00:08:46.220 [2024-07-25 15:57:04.023458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a1aadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.220 [2024-07-25 15:57:04.023482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.220 [2024-07-25 15:57:04.023596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.220 [2024-07-25 15:57:04.023613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.221 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:46.221 #22 NEW cov: 12225 ft: 14444 corp: 15/433b lim: 40 exec/s: 0 rss: 72Mb L: 21/40 MS: 1 ChangeBit- 00:08:46.221 [2024-07-25 15:57:04.074632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.221 [2024-07-25 15:57:04.074657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.221 [2024-07-25 15:57:04.074773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0030f700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.221 [2024-07-25 15:57:04.074788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.221 [2024-07-25 15:57:04.074897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.221 [2024-07-25 15:57:04.074912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.221 [2024-07-25 15:57:04.075020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.221 [2024-07-25 15:57:04.075034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:46.221 [2024-07-25 15:57:04.075141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00004a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.221 [2024-07-25 15:57:04.075155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:46.221 #23 NEW cov: 12225 ft: 14453 corp: 16/473b lim: 40 exec/s: 23 rss: 72Mb L: 40/40 MS: 1 ChangeByte- 00:08:46.221 [2024-07-25 15:57:04.144569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.221 [2024-07-25 15:57:04.144594] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.221 [2024-07-25 15:57:04.144710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:15151510 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.221 [2024-07-25 15:57:04.144728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.221 [2024-07-25 15:57:04.144854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:1515bebe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.221 [2024-07-25 15:57:04.144870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.221 #24 NEW cov: 12225 ft: 14490 corp: 17/499b lim: 40 exec/s: 24 rss: 72Mb L: 26/40 MS: 1 ChangeBinInt- 00:08:46.480 [2024-07-25 15:57:04.214525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0aadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.214551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.480 [2024-07-25 15:57:04.214662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.214676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.480 #25 NEW cov: 12225 ft: 14502 corp: 18/522b lim: 40 exec/s: 25 rss: 72Mb L: 23/40 MS: 1 CopyPart- 00:08:46.480 [2024-07-25 15:57:04.264840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.264866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.480 [2024-07-25 15:57:04.264984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.264999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.480 #30 NEW cov: 12225 ft: 14517 corp: 19/540b lim: 40 exec/s: 30 rss: 72Mb L: 18/40 MS: 5 ChangeByte-CopyPart-ChangeByte-EraseBytes-CrossOver- 00:08:46.480 [2024-07-25 15:57:04.315691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.315714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.480 [2024-07-25 15:57:04.315819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.315832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.480 [2024-07-25 15:57:04.315935] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:1515bebe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.315947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.480 #31 NEW cov: 12225 ft: 14568 corp: 20/566b lim: 40 exec/s: 31 rss: 72Mb L: 26/40 MS: 1 CopyPart- 00:08:46.480 [2024-07-25 15:57:04.366541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.366565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.480 [2024-07-25 15:57:04.366675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.366688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.480 [2024-07-25 15:57:04.366788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.366800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.480 [2024-07-25 15:57:04.366903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.366916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:46.480 [2024-07-25 15:57:04.367025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00004a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.367038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:46.480 #32 NEW cov: 12225 ft: 14588 corp: 21/606b lim: 40 exec/s: 32 rss: 72Mb L: 40/40 MS: 1 ChangeBinInt- 00:08:46.480 [2024-07-25 15:57:04.416638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.416668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.480 [2024-07-25 15:57:04.416781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:02000000 cdw11:0000f700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.416799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.480 [2024-07-25 15:57:04.416912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.416926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:08:46.480 [2024-07-25 15:57:04.417044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.417057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:46.480 [2024-07-25 15:57:04.417161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00004a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.480 [2024-07-25 15:57:04.417175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:46.480 #33 NEW cov: 12225 ft: 14614 corp: 22/646b lim: 40 exec/s: 33 rss: 72Mb L: 40/40 MS: 1 ChangeBit- 00:08:46.739 [2024-07-25 15:57:04.486076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:159b1515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.739 [2024-07-25 15:57:04.486100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.739 [2024-07-25 15:57:04.486206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15bebebe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.739 [2024-07-25 15:57:04.486220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.739 #34 NEW cov: 12225 ft: 14672 corp: 23/663b lim: 40 exec/s: 34 rss: 72Mb L: 17/40 MS: 1 EraseBytes- 00:08:46.739 [2024-07-25 15:57:04.536901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0aad20 cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.739 [2024-07-25 15:57:04.536924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.739 [2024-07-25 15:57:04.537029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.739 [2024-07-25 15:57:04.537042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.739 [2024-07-25 15:57:04.537156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ad000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.739 [2024-07-25 15:57:04.537170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.739 [2024-07-25 15:57:04.537275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.739 [2024-07-25 15:57:04.537289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:46.739 #35 NEW cov: 12225 ft: 14723 corp: 24/695b lim: 40 exec/s: 35 rss: 72Mb L: 32/40 MS: 1 ChangeBinInt- 00:08:46.739 [2024-07-25 15:57:04.586482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:08:46.739 [2024-07-25 15:57:04.586504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.739 [2024-07-25 15:57:04.586613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adadffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.739 [2024-07-25 15:57:04.586625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.739 #36 NEW cov: 12225 ft: 14739 corp: 25/718b lim: 40 exec/s: 36 rss: 72Mb L: 23/40 MS: 1 InsertRepeatedBytes- 00:08:46.739 [2024-07-25 15:57:04.646725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0aadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.739 [2024-07-25 15:57:04.646747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.739 [2024-07-25 15:57:04.646862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.739 [2024-07-25 15:57:04.646879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.739 #37 NEW cov: 12225 ft: 14749 corp: 26/741b lim: 40 exec/s: 37 rss: 73Mb L: 23/40 MS: 1 CrossOver- 00:08:46.739 [2024-07-25 15:57:04.717098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0aadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.739 [2024-07-25 15:57:04.717121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.739 [2024-07-25 15:57:04.717218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.739 [2024-07-25 15:57:04.717234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.739 [2024-07-25 15:57:04.717352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:6c6c6c6c cdw11:6c6c6c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.739 [2024-07-25 15:57:04.717369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.998 #38 NEW cov: 12225 ft: 14777 corp: 27/772b lim: 40 exec/s: 38 rss: 73Mb L: 31/40 MS: 1 ChangeByte- 00:08:46.998 [2024-07-25 15:57:04.787225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0aadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.998 [2024-07-25 15:57:04.787249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.998 [2024-07-25 15:57:04.787350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.998 [2024-07-25 15:57:04.787365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.998 #39 NEW cov: 12225 
ft: 14823 corp: 28/795b lim: 40 exec/s: 39 rss: 73Mb L: 23/40 MS: 1 ShuffleBytes- 00:08:46.998 [2024-07-25 15:57:04.838382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0aadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.998 [2024-07-25 15:57:04.838409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.998 [2024-07-25 15:57:04.838527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.998 [2024-07-25 15:57:04.838544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.998 [2024-07-25 15:57:04.838646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:6c6c6c6c cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.998 [2024-07-25 15:57:04.838659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.998 [2024-07-25 15:57:04.838765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff6c6c6c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.998 [2024-07-25 15:57:04.838779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:46.998 [2024-07-25 15:57:04.838894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:6c6c6cad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.998 [2024-07-25 15:57:04.838908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:46.998 #40 NEW cov: 12225 ft: 14826 corp: 29/835b lim: 40 exec/s: 40 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:08:46.998 [2024-07-25 15:57:04.888551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00002900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.998 [2024-07-25 15:57:04.888575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.998 [2024-07-25 15:57:04.888681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:02000000 cdw11:0000f700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.998 [2024-07-25 15:57:04.888694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.998 [2024-07-25 15:57:04.888797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.998 [2024-07-25 15:57:04.888811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.998 [2024-07-25 15:57:04.888924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.998 [2024-07-25 15:57:04.888939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:46.998 [2024-07-25 15:57:04.889050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00004a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.998 [2024-07-25 15:57:04.889063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:46.998 #41 NEW cov: 12225 ft: 14839 corp: 30/875b lim: 40 exec/s: 41 rss: 73Mb L: 40/40 MS: 1 ChangeByte- 00:08:46.998 [2024-07-25 15:57:04.958036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a1aadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.998 [2024-07-25 15:57:04.958060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.998 [2024-07-25 15:57:04.958168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.998 [2024-07-25 15:57:04.958182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.257 #42 NEW cov: 12225 ft: 14885 corp: 31/895b lim: 40 exec/s: 42 rss: 73Mb L: 20/40 MS: 1 EraseBytes- 00:08:47.257 [2024-07-25 15:57:05.018611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0aadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.257 [2024-07-25 15:57:05.018636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.257 [2024-07-25 15:57:05.018743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adadadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.257 [2024-07-25 15:57:05.018763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.257 [2024-07-25 15:57:05.018888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:adadadad cdw11:ad3eadad SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.257 [2024-07-25 15:57:05.018904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.257 #43 NEW cov: 12225 ft: 14891 corp: 32/919b lim: 40 exec/s: 43 rss: 73Mb L: 24/40 MS: 1 InsertByte- 00:08:47.257 [2024-07-25 15:57:05.088839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:159b1515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.257 [2024-07-25 15:57:05.088861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.257 [2024-07-25 15:57:05.088965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:159b1515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.257 [2024-07-25 15:57:05.088977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.257 [2024-07-25 15:57:05.089084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:1515bebe 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.257 [2024-07-25 15:57:05.089097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.257 #44 NEW cov: 12225 ft: 14899 corp: 33/945b lim: 40 exec/s: 22 rss: 73Mb L: 26/40 MS: 1 CopyPart- 00:08:47.257 #44 DONE cov: 12225 ft: 14899 corp: 33/945b lim: 40 exec/s: 22 rss: 73Mb 00:08:47.257 ###### Recommended dictionary. ###### 00:08:47.257 "\001\000\000\000\000\000\000\000" # Uses: 0 00:08:47.257 ###### End of recommended dictionary. ###### 00:08:47.257 Done 44 runs in 2 second(s) 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:47.257 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:47.258 15:57:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:08:47.516 [2024-07-25 15:57:05.268122] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:08:47.516 [2024-07-25 15:57:05.268203] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169570 ] 00:08:47.516 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.516 [2024-07-25 15:57:05.445004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.775 [2024-07-25 15:57:05.511885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.775 [2024-07-25 15:57:05.570651] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.775 [2024-07-25 15:57:05.586905] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:08:47.775 INFO: Running with entropic power schedule (0xFF, 100). 00:08:47.775 INFO: Seed: 4261731297 00:08:47.775 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:47.775 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:47.775 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:47.775 INFO: A corpus is not provided, starting from an empty corpus 00:08:47.775 #2 INITED exec/s: 0 rss: 63Mb 00:08:47.775 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:47.775 This may also happen if the target rejected all inputs we tried so far 00:08:47.775 [2024-07-25 15:57:05.647938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.775 [2024-07-25 15:57:05.647982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.775 [2024-07-25 15:57:05.648079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.775 [2024-07-25 15:57:05.648093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.775 [2024-07-25 15:57:05.648183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.775 [2024-07-25 15:57:05.648196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.775 [2024-07-25 15:57:05.648293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:47.775 [2024-07-25 15:57:05.648307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.034 NEW_FUNC[1/701]: 0x492a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:08:48.034 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:48.034 #4 NEW cov: 12010 ft: 12011 corp: 2/37b lim: 40 exec/s: 0 rss: 71Mb L: 36/36 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:48.034 [2024-07-25 15:57:05.818639] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:05.818687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.034 [2024-07-25 15:57:05.818819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:05.818838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.034 [2024-07-25 15:57:05.818965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:05.818982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.034 [2024-07-25 15:57:05.819104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:05.819120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.034 #7 NEW cov: 12123 ft: 12667 corp: 3/72b lim: 40 exec/s: 0 rss: 71Mb L: 35/36 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:08:48.034 [2024-07-25 15:57:05.868953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:05.868979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.034 [2024-07-25 15:57:05.869109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:05.869125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.034 [2024-07-25 15:57:05.869231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:05.869246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.034 [2024-07-25 15:57:05.869354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:05.869371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.034 #8 NEW cov: 12129 ft: 12865 corp: 4/107b lim: 40 exec/s: 0 rss: 72Mb L: 35/36 MS: 1 CopyPart- 00:08:48.034 [2024-07-25 15:57:05.939154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:05.939179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.034 [2024-07-25 
15:57:05.939287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:05.939302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.034 [2024-07-25 15:57:05.939412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:05.939425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.034 [2024-07-25 15:57:05.939542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:05.939555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.034 #14 NEW cov: 12214 ft: 13198 corp: 5/140b lim: 40 exec/s: 0 rss: 72Mb L: 33/36 MS: 1 EraseBytes- 00:08:48.034 [2024-07-25 15:57:06.009636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:06.009658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.034 [2024-07-25 15:57:06.009787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:763a7676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:06.009800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.034 [2024-07-25 15:57:06.009911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:06.009926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.034 [2024-07-25 15:57:06.010034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.034 [2024-07-25 15:57:06.010047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.294 #15 NEW cov: 12214 ft: 13247 corp: 6/175b lim: 40 exec/s: 0 rss: 72Mb L: 35/36 MS: 1 ChangeByte- 00:08:48.294 [2024-07-25 15:57:06.059108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.294 [2024-07-25 15:57:06.059133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.294 [2024-07-25 15:57:06.059261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.294 [2024-07-25 15:57:06.059275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.294 #21 NEW 
cov: 12214 ft: 13646 corp: 7/194b lim: 40 exec/s: 0 rss: 72Mb L: 19/36 MS: 1 CrossOver- 00:08:48.294 [2024-07-25 15:57:06.109719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.294 [2024-07-25 15:57:06.109743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.294 [2024-07-25 15:57:06.109874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.294 [2024-07-25 15:57:06.109891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.294 #22 NEW cov: 12214 ft: 13717 corp: 8/213b lim: 40 exec/s: 0 rss: 72Mb L: 19/36 MS: 1 ShuffleBytes- 00:08:48.294 [2024-07-25 15:57:06.181005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:767676f4 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.294 [2024-07-25 15:57:06.181029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.294 [2024-07-25 15:57:06.181131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.294 [2024-07-25 15:57:06.181147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.294 [2024-07-25 15:57:06.181265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.294 [2024-07-25 15:57:06.181282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.294 [2024-07-25 15:57:06.181389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.294 [2024-07-25 15:57:06.181405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.294 #23 NEW cov: 12214 ft: 13759 corp: 9/249b lim: 40 exec/s: 0 rss: 72Mb L: 36/36 MS: 1 InsertByte- 00:08:48.294 [2024-07-25 15:57:06.230406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76769976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.294 [2024-07-25 15:57:06.230430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.294 [2024-07-25 15:57:06.230541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.294 [2024-07-25 15:57:06.230556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.294 #24 NEW cov: 12214 ft: 13766 corp: 10/269b lim: 40 exec/s: 0 rss: 72Mb L: 20/36 MS: 1 InsertByte- 00:08:48.553 [2024-07-25 15:57:06.291351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a767676 
cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.291374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.554 [2024-07-25 15:57:06.291490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.291504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.554 [2024-07-25 15:57:06.291611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.291625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.554 #25 NEW cov: 12214 ft: 14024 corp: 11/297b lim: 40 exec/s: 0 rss: 72Mb L: 28/36 MS: 1 CrossOver- 00:08:48.554 [2024-07-25 15:57:06.342083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:766b7676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.342106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.554 [2024-07-25 15:57:06.342213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.342228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.554 [2024-07-25 15:57:06.342345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.342359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.554 [2024-07-25 15:57:06.342472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.342485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.554 #26 NEW cov: 12214 ft: 14064 corp: 12/331b lim: 40 exec/s: 0 rss: 72Mb L: 34/36 MS: 1 InsertByte- 00:08:48.554 [2024-07-25 15:57:06.411915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.411939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.554 [2024-07-25 15:57:06.412054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.412070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.554 [2024-07-25 15:57:06.412182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 
cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.412196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.554 #27 NEW cov: 12214 ft: 14082 corp: 13/361b lim: 40 exec/s: 0 rss: 72Mb L: 30/36 MS: 1 EraseBytes- 00:08:48.554 [2024-07-25 15:57:06.461689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.461713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.554 [2024-07-25 15:57:06.461822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:7676768a cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.461837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.554 #28 NEW cov: 12214 ft: 14104 corp: 14/380b lim: 40 exec/s: 0 rss: 72Mb L: 19/36 MS: 1 ChangeBinInt- 00:08:48.554 [2024-07-25 15:57:06.512711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:766b7676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.512736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.554 [2024-07-25 15:57:06.512840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.512854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.554 [2024-07-25 15:57:06.512964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:2b767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.512980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.554 [2024-07-25 15:57:06.513083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.554 [2024-07-25 15:57:06.513100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.812 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:48.812 #29 NEW cov: 12237 ft: 14182 corp: 15/415b lim: 40 exec/s: 0 rss: 72Mb L: 35/36 MS: 1 InsertByte- 00:08:48.812 [2024-07-25 15:57:06.582233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.812 [2024-07-25 15:57:06.582258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.812 [2024-07-25 15:57:06.582360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767e8a cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.812 [2024-07-25 15:57:06.582375] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.812 #30 NEW cov: 12237 ft: 14214 corp: 16/434b lim: 40 exec/s: 0 rss: 72Mb L: 19/36 MS: 1 ChangeBit- 00:08:48.812 [2024-07-25 15:57:06.653295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.813 [2024-07-25 15:57:06.653320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.813 [2024-07-25 15:57:06.653437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:764a4a4a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.813 [2024-07-25 15:57:06.653452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.813 [2024-07-25 15:57:06.653557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:4a4a4a4a cdw11:4a4a4a4a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.813 [2024-07-25 15:57:06.653571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.813 [2024-07-25 15:57:06.653678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:4a4a4a4a cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.813 [2024-07-25 15:57:06.653691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.813 #31 NEW cov: 12237 ft: 14226 corp: 17/468b lim: 40 exec/s: 31 rss: 72Mb L: 34/36 MS: 1 InsertRepeatedBytes- 00:08:48.813 [2024-07-25 15:57:06.703498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.813 [2024-07-25 15:57:06.703521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.813 [2024-07-25 15:57:06.703634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:764a4a4a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.813 [2024-07-25 15:57:06.703650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.813 [2024-07-25 15:57:06.703765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:4a4a4a4a cdw11:4a4a4a4a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.813 [2024-07-25 15:57:06.703780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.813 [2024-07-25 15:57:06.703909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:4a4a4a4a cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.813 [2024-07-25 15:57:06.703924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.813 #32 NEW cov: 12237 ft: 14259 corp: 18/502b lim: 40 exec/s: 32 rss: 72Mb L: 34/36 MS: 1 CopyPart- 00:08:48.813 [2024-07-25 15:57:06.772951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 
nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.813 [2024-07-25 15:57:06.772977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.813 [2024-07-25 15:57:06.773087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.813 [2024-07-25 15:57:06.773102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.813 #33 NEW cov: 12237 ft: 14283 corp: 19/521b lim: 40 exec/s: 33 rss: 72Mb L: 19/36 MS: 1 ShuffleBytes- 00:08:49.071 [2024-07-25 15:57:06.823995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:766b7676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.071 [2024-07-25 15:57:06.824023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.071 [2024-07-25 15:57:06.824138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.071 [2024-07-25 15:57:06.824152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.071 [2024-07-25 15:57:06.824261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:2b767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.071 [2024-07-25 15:57:06.824275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.071 [2024-07-25 15:57:06.824377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.071 [2024-07-25 15:57:06.824394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.071 #34 NEW cov: 12237 ft: 14298 corp: 20/556b lim: 40 exec/s: 34 rss: 72Mb L: 35/36 MS: 1 ShuffleBytes- 00:08:49.071 [2024-07-25 15:57:06.893765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.071 [2024-07-25 15:57:06.893790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.071 [2024-07-25 15:57:06.893909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.071 [2024-07-25 15:57:06.893924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.071 #35 NEW cov: 12237 ft: 14352 corp: 21/575b lim: 40 exec/s: 35 rss: 72Mb L: 19/36 MS: 1 ChangeBit- 00:08:49.071 [2024-07-25 15:57:06.944756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.071 [2024-07-25 15:57:06.944784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:08:49.071 [2024-07-25 15:57:06.944901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:764a4a4a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.071 [2024-07-25 15:57:06.944917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.071 [2024-07-25 15:57:06.945031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:4a4a4a4a cdw11:4a4a4a4a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.071 [2024-07-25 15:57:06.945045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.071 [2024-07-25 15:57:06.945152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:4a4a4a4a cdw11:767676a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.071 [2024-07-25 15:57:06.945165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.071 #36 NEW cov: 12237 ft: 14373 corp: 22/609b lim: 40 exec/s: 36 rss: 72Mb L: 34/36 MS: 1 ChangeByte- 00:08:49.071 [2024-07-25 15:57:07.014962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.071 [2024-07-25 15:57:07.014987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.071 [2024-07-25 15:57:07.015105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:764a4a4a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.072 [2024-07-25 15:57:07.015122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.072 [2024-07-25 15:57:07.015249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:4a4a4a4a cdw11:4a4a4a4a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.072 [2024-07-25 15:57:07.015265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.072 [2024-07-25 15:57:07.015376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:4a4a4a4a cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.072 [2024-07-25 15:57:07.015390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.072 #37 NEW cov: 12237 ft: 14379 corp: 23/643b lim: 40 exec/s: 37 rss: 72Mb L: 34/36 MS: 1 ShuffleBytes- 00:08:49.330 [2024-07-25 15:57:07.065373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:766b7676 cdw11:7676767a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.065400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.330 [2024-07-25 15:57:07.065508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.065524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:08:49.330 [2024-07-25 15:57:07.065632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:762b7676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.065647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.330 [2024-07-25 15:57:07.065757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.065777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.330 #38 NEW cov: 12237 ft: 14392 corp: 24/679b lim: 40 exec/s: 38 rss: 72Mb L: 36/36 MS: 1 InsertByte- 00:08:49.330 [2024-07-25 15:57:07.114903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.114930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.330 [2024-07-25 15:57:07.115052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.115067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.330 [2024-07-25 15:57:07.115176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:76767476 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.115190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.330 #39 NEW cov: 12237 ft: 14401 corp: 25/707b lim: 40 exec/s: 39 rss: 72Mb L: 28/36 MS: 1 ChangeBit- 00:08:49.330 [2024-07-25 15:57:07.184944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.184969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.330 [2024-07-25 15:57:07.185084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:767676fc cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.185103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.330 #40 NEW cov: 12237 ft: 14416 corp: 26/726b lim: 40 exec/s: 40 rss: 72Mb L: 19/36 MS: 1 ChangeByte- 00:08:49.330 [2024-07-25 15:57:07.256212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.256235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.330 [2024-07-25 15:57:07.256350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 
15:57:07.256366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.330 [2024-07-25 15:57:07.256475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.256489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.330 [2024-07-25 15:57:07.256598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.256614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.330 #41 NEW cov: 12237 ft: 14447 corp: 27/759b lim: 40 exec/s: 41 rss: 72Mb L: 33/36 MS: 1 ShuffleBytes- 00:08:49.330 [2024-07-25 15:57:07.306900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:766b7676 cdw11:7676767a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.306923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.330 [2024-07-25 15:57:07.307033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.307059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.330 [2024-07-25 15:57:07.307160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:762b7676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.307172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.330 [2024-07-25 15:57:07.307298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.307311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.330 [2024-07-25 15:57:07.307418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:76767676 cdw11:7676768d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.330 [2024-07-25 15:57:07.307433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:49.588 #42 NEW cov: 12237 ft: 14499 corp: 28/799b lim: 40 exec/s: 42 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:08:49.588 [2024-07-25 15:57:07.377101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.589 [2024-07-25 15:57:07.377125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.589 [2024-07-25 15:57:07.377241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:764a4a4a SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:49.589 [2024-07-25 15:57:07.377258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.589 [2024-07-25 15:57:07.377371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:4a4a4a4a cdw11:4a4a4a4a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.589 [2024-07-25 15:57:07.377385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.589 [2024-07-25 15:57:07.377496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:4a4a4a4a cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.589 [2024-07-25 15:57:07.377511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.589 #43 NEW cov: 12237 ft: 14566 corp: 29/833b lim: 40 exec/s: 43 rss: 73Mb L: 34/40 MS: 1 ShuffleBytes- 00:08:49.589 [2024-07-25 15:57:07.427295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:766b7676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.589 [2024-07-25 15:57:07.427320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.589 [2024-07-25 15:57:07.427430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.589 [2024-07-25 15:57:07.427444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.589 [2024-07-25 15:57:07.427556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:2b767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.589 [2024-07-25 15:57:07.427569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.589 [2024-07-25 15:57:07.427680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.589 [2024-07-25 15:57:07.427692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.589 #44 NEW cov: 12237 ft: 14577 corp: 30/868b lim: 40 exec/s: 44 rss: 73Mb L: 35/40 MS: 1 ShuffleBytes- 00:08:49.589 [2024-07-25 15:57:07.477568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:766b7676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.589 [2024-07-25 15:57:07.477592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.589 [2024-07-25 15:57:07.477710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.589 [2024-07-25 15:57:07.477724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.589 [2024-07-25 15:57:07.477838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:2b767676 cdw11:76767676 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:08:49.589 [2024-07-25 15:57:07.477853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.589 [2024-07-25 15:57:07.477969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.589 [2024-07-25 15:57:07.477984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.589 #45 NEW cov: 12237 ft: 14609 corp: 31/903b lim: 40 exec/s: 45 rss: 73Mb L: 35/40 MS: 1 ShuffleBytes- 00:08:49.589 [2024-07-25 15:57:07.527128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.589 [2024-07-25 15:57:07.527155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.589 [2024-07-25 15:57:07.527278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:767676fc cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.589 [2024-07-25 15:57:07.527294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.589 #46 NEW cov: 12237 ft: 14660 corp: 32/922b lim: 40 exec/s: 46 rss: 73Mb L: 19/40 MS: 1 CopyPart- 00:08:49.848 [2024-07-25 15:57:07.587526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.848 [2024-07-25 15:57:07.587550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.848 [2024-07-25 15:57:07.587667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:7676768a cdw11:89898989 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.848 [2024-07-25 15:57:07.587680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.848 #47 NEW cov: 12237 ft: 14663 corp: 33/941b lim: 40 exec/s: 47 rss: 73Mb L: 19/40 MS: 1 ShuffleBytes- 00:08:49.848 [2024-07-25 15:57:07.638678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.848 [2024-07-25 15:57:07.638701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.848 [2024-07-25 15:57:07.638823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.848 [2024-07-25 15:57:07.638837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.848 [2024-07-25 15:57:07.638956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.848 [2024-07-25 15:57:07.638970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.848 [2024-07-25 15:57:07.639084] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767651 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.848 [2024-07-25 15:57:07.639098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.848 #48 NEW cov: 12237 ft: 14678 corp: 34/977b lim: 40 exec/s: 24 rss: 73Mb L: 36/40 MS: 1 InsertByte- 00:08:49.848 #48 DONE cov: 12237 ft: 14678 corp: 34/977b lim: 40 exec/s: 24 rss: 73Mb 00:08:49.848 Done 48 runs in 2 second(s) 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:49.848 15:57:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:08:49.848 [2024-07-25 15:57:07.817451] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:08:49.848 [2024-07-25 15:57:07.817511] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170005 ] 00:08:50.106 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.106 [2024-07-25 15:57:07.998212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.106 [2024-07-25 15:57:08.062385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.364 [2024-07-25 15:57:08.121369] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.364 [2024-07-25 15:57:08.137594] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:08:50.364 INFO: Running with entropic power schedule (0xFF, 100). 00:08:50.364 INFO: Seed: 2517729041 00:08:50.364 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:50.364 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:50.364 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:50.364 INFO: A corpus is not provided, starting from an empty corpus 00:08:50.364 #2 INITED exec/s: 0 rss: 63Mb 00:08:50.364 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:50.364 This may also happen if the target rejected all inputs we tried so far 00:08:50.364 [2024-07-25 15:57:08.197908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.364 [2024-07-25 15:57:08.197941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.364 NEW_FUNC[1/701]: 0x4947d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:08:50.364 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:50.364 #9 NEW cov: 12008 ft: 12009 corp: 2/15b lim: 40 exec/s: 0 rss: 70Mb L: 14/14 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:50.623 [2024-07-25 15:57:08.368432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.623 [2024-07-25 15:57:08.368470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.623 #10 NEW cov: 12121 ft: 12748 corp: 3/29b lim: 40 exec/s: 0 rss: 71Mb L: 14/14 MS: 1 ChangeBinInt- 00:08:50.623 [2024-07-25 15:57:08.438544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.623 [2024-07-25 15:57:08.438571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.623 #12 NEW cov: 12127 ft: 12915 corp: 4/44b lim: 40 exec/s: 0 rss: 71Mb L: 15/15 MS: 2 CrossOver-CrossOver- 00:08:50.623 [2024-07-25 15:57:08.489145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.623 
[2024-07-25 15:57:08.489168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.623 [2024-07-25 15:57:08.489277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a0a2a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.623 [2024-07-25 15:57:08.489290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.623 #15 NEW cov: 12212 ft: 13831 corp: 5/60b lim: 40 exec/s: 0 rss: 71Mb L: 16/16 MS: 3 ChangeBit-InsertByte-CrossOver- 00:08:50.623 [2024-07-25 15:57:08.538946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.623 [2024-07-25 15:57:08.538968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.623 #16 NEW cov: 12212 ft: 13888 corp: 6/74b lim: 40 exec/s: 0 rss: 71Mb L: 14/16 MS: 1 ChangeBit- 00:08:50.623 [2024-07-25 15:57:08.589544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:4e0a4b4b cdw11:4b4b4b4b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.623 [2024-07-25 15:57:08.589565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.623 [2024-07-25 15:57:08.589668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:4b4b4b4b cdw11:4b4b4b4b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.623 [2024-07-25 15:57:08.589681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.623 #19 NEW cov: 12212 ft: 13987 corp: 7/90b lim: 40 exec/s: 0 rss: 71Mb L: 16/16 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes- 00:08:50.882 [2024-07-25 15:57:08.639796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00005500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.882 [2024-07-25 15:57:08.639820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.882 [2024-07-25 15:57:08.639924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.882 [2024-07-25 15:57:08.639949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.882 #20 NEW cov: 12212 ft: 14029 corp: 8/108b lim: 40 exec/s: 0 rss: 71Mb L: 18/18 MS: 1 CMP- DE: "U\000\000\000"- 00:08:50.882 [2024-07-25 15:57:08.709580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000000a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.882 [2024-07-25 15:57:08.709602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.882 #21 NEW cov: 12212 ft: 14121 corp: 9/123b lim: 40 exec/s: 0 rss: 71Mb L: 15/18 MS: 1 InsertByte- 00:08:50.882 [2024-07-25 15:57:08.760193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00005500 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:08:50.882 [2024-07-25 15:57:08.760215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.882 [2024-07-25 15:57:08.760324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.882 [2024-07-25 15:57:08.760339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.882 #22 NEW cov: 12212 ft: 14139 corp: 10/141b lim: 40 exec/s: 0 rss: 71Mb L: 18/18 MS: 1 CopyPart- 00:08:50.882 [2024-07-25 15:57:08.820060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.882 [2024-07-25 15:57:08.820083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.882 #23 NEW cov: 12212 ft: 14175 corp: 11/155b lim: 40 exec/s: 0 rss: 71Mb L: 14/18 MS: 1 CMP- DE: "\001\000\000\004"- 00:08:50.882 [2024-07-25 15:57:08.870646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00040000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.882 [2024-07-25 15:57:08.870669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.882 [2024-07-25 15:57:08.870766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00005500 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.882 [2024-07-25 15:57:08.870780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.141 #24 NEW cov: 12212 ft: 14191 corp: 12/177b lim: 40 exec/s: 0 rss: 71Mb L: 22/22 MS: 1 PersAutoDict- DE: "\001\000\000\004"- 00:08:51.141 [2024-07-25 15:57:08.920863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00ff16d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.141 [2024-07-25 15:57:08.920886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.141 [2024-07-25 15:57:08.920999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:67bc6eca cdw11:540a0a2a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.141 [2024-07-25 15:57:08.921024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.141 #25 NEW cov: 12212 ft: 14217 corp: 13/193b lim: 40 exec/s: 0 rss: 71Mb L: 16/22 MS: 1 CMP- DE: "\377\026\331g\274n\312T"- 00:08:51.141 [2024-07-25 15:57:08.991286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.141 [2024-07-25 15:57:08.991310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.141 [2024-07-25 15:57:08.991418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a0a2a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.141 [2024-07-25 15:57:08.991433] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.141 #26 NEW cov: 12212 ft: 14256 corp: 14/209b lim: 40 exec/s: 0 rss: 72Mb L: 16/22 MS: 1 CrossOver- 00:08:51.141 [2024-07-25 15:57:09.041323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d967bc6e cdw11:ca540a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.141 [2024-07-25 15:57:09.041346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.141 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:51.141 #27 NEW cov: 12235 ft: 14308 corp: 15/218b lim: 40 exec/s: 0 rss: 72Mb L: 9/22 MS: 1 EraseBytes- 00:08:51.141 [2024-07-25 15:57:09.111616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.141 [2024-07-25 15:57:09.111640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.400 #28 NEW cov: 12235 ft: 14410 corp: 16/232b lim: 40 exec/s: 0 rss: 72Mb L: 14/22 MS: 1 ShuffleBytes- 00:08:51.400 [2024-07-25 15:57:09.172477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00005500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.400 [2024-07-25 15:57:09.172501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.400 [2024-07-25 15:57:09.172624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.400 [2024-07-25 15:57:09.172647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.400 #29 NEW cov: 12235 ft: 14454 corp: 17/250b lim: 40 exec/s: 29 rss: 72Mb L: 18/22 MS: 1 ChangeByte- 00:08:51.400 [2024-07-25 15:57:09.242757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:4e0a4b4f cdw11:4b4b4b4b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.400 [2024-07-25 15:57:09.242785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.400 [2024-07-25 15:57:09.242900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:4b4b4b4b cdw11:4b4b4b4b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.400 [2024-07-25 15:57:09.242915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.400 #30 NEW cov: 12235 ft: 14470 corp: 18/266b lim: 40 exec/s: 30 rss: 72Mb L: 16/22 MS: 1 ChangeBit- 00:08:51.400 [2024-07-25 15:57:09.312876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d967bc54 cdw11:0a0a2a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.400 [2024-07-25 15:57:09.312900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.400 #31 NEW cov: 12235 ft: 14516 corp: 19/275b lim: 40 exec/s: 31 rss: 72Mb L: 9/22 MS: 1 CopyPart- 00:08:51.400 [2024-07-25 
15:57:09.383521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d9670000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.400 [2024-07-25 15:57:09.383545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.400 [2024-07-25 15:57:09.383660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000bc54 cdw11:0a0a2a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.400 [2024-07-25 15:57:09.383673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.659 #32 NEW cov: 12235 ft: 14536 corp: 20/292b lim: 40 exec/s: 32 rss: 72Mb L: 17/22 MS: 1 InsertRepeatedBytes- 00:08:51.659 [2024-07-25 15:57:09.455009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.659 [2024-07-25 15:57:09.455035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.659 [2024-07-25 15:57:09.455141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.659 [2024-07-25 15:57:09.455166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.659 [2024-07-25 15:57:09.455267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.659 [2024-07-25 15:57:09.455280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.659 [2024-07-25 15:57:09.455390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.659 [2024-07-25 15:57:09.455406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:51.659 #33 NEW cov: 12235 ft: 14886 corp: 21/327b lim: 40 exec/s: 33 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:08:51.659 [2024-07-25 15:57:09.523940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d967bc54 cdw11:0a0a2a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.659 [2024-07-25 15:57:09.523995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.659 #34 NEW cov: 12235 ft: 14911 corp: 22/336b lim: 40 exec/s: 34 rss: 72Mb L: 9/35 MS: 1 ChangeBit- 00:08:51.659 [2024-07-25 15:57:09.574150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.659 [2024-07-25 15:57:09.574176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.659 #35 NEW cov: 12235 ft: 14926 corp: 23/351b lim: 40 exec/s: 35 rss: 72Mb L: 15/35 MS: 1 CopyPart- 00:08:51.659 [2024-07-25 15:57:09.644382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 
cdw10:00550000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.659 [2024-07-25 15:57:09.644407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.919 #36 NEW cov: 12235 ft: 14960 corp: 24/366b lim: 40 exec/s: 36 rss: 72Mb L: 15/35 MS: 1 PersAutoDict- DE: "U\000\000\000"- 00:08:51.919 [2024-07-25 15:57:09.715057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:0004003a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.919 [2024-07-25 15:57:09.715082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.919 [2024-07-25 15:57:09.715185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000055 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.919 [2024-07-25 15:57:09.715200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.919 #37 NEW cov: 12235 ft: 14977 corp: 25/389b lim: 40 exec/s: 37 rss: 72Mb L: 23/35 MS: 1 InsertByte- 00:08:51.919 [2024-07-25 15:57:09.785337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00002f3f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.919 [2024-07-25 15:57:09.785362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.919 [2024-07-25 15:57:09.785469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00005500 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.919 [2024-07-25 15:57:09.785484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.919 #38 NEW cov: 12235 ft: 15010 corp: 26/411b lim: 40 exec/s: 38 rss: 72Mb L: 22/35 MS: 1 CMP- DE: "/?\000\000"- 00:08:51.919 [2024-07-25 15:57:09.836505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00010000 cdw11:00000500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.919 [2024-07-25 15:57:09.836530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.919 [2024-07-25 15:57:09.836650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.919 [2024-07-25 15:57:09.836665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.919 [2024-07-25 15:57:09.836765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.919 [2024-07-25 15:57:09.836780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.919 [2024-07-25 15:57:09.836892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.919 [2024-07-25 15:57:09.836907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:08:51.919 #39 NEW cov: 12235 ft: 15024 corp: 27/446b lim: 40 exec/s: 39 rss: 72Mb L: 35/35 MS: 1 ChangeBinInt- 00:08:51.919 [2024-07-25 15:57:09.905715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d967bc54 cdw11:0a0a2a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.919 [2024-07-25 15:57:09.905740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.177 #40 NEW cov: 12235 ft: 15061 corp: 28/456b lim: 40 exec/s: 40 rss: 73Mb L: 10/35 MS: 1 InsertByte- 00:08:52.177 [2024-07-25 15:57:09.966124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000800 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.177 [2024-07-25 15:57:09.966148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.177 #41 NEW cov: 12235 ft: 15067 corp: 29/470b lim: 40 exec/s: 41 rss: 73Mb L: 14/35 MS: 1 ChangeBit- 00:08:52.177 [2024-07-25 15:57:10.017229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:0004003a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.177 [2024-07-25 15:57:10.017254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.177 [2024-07-25 15:57:10.017352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00003e00 cdw11:55000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.177 [2024-07-25 15:57:10.017368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.178 [2024-07-25 15:57:10.017472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.178 [2024-07-25 15:57:10.017486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.178 #42 NEW cov: 12235 ft: 15268 corp: 30/494b lim: 40 exec/s: 42 rss: 73Mb L: 24/35 MS: 1 InsertByte- 00:08:52.178 [2024-07-25 15:57:10.087041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00005500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.178 [2024-07-25 15:57:10.087068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.178 [2024-07-25 15:57:10.087171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:000e0000 cdw11:000e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.178 [2024-07-25 15:57:10.087185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.178 #43 NEW cov: 12235 ft: 15320 corp: 31/512b lim: 40 exec/s: 43 rss: 73Mb L: 18/35 MS: 1 CrossOver- 00:08:52.178 [2024-07-25 15:57:10.136843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000800 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.178 [2024-07-25 15:57:10.136869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:08:52.436 #44 NEW cov: 12235 ft: 15352 corp: 32/526b lim: 40 exec/s: 22 rss: 73Mb L: 14/35 MS: 1 PersAutoDict- DE: "U\000\000\000"- 00:08:52.436 #44 DONE cov: 12235 ft: 15352 corp: 32/526b lim: 40 exec/s: 22 rss: 73Mb 00:08:52.436 ###### Recommended dictionary. ###### 00:08:52.436 "U\000\000\000" # Uses: 2 00:08:52.436 "\001\000\000\004" # Uses: 1 00:08:52.436 "\377\026\331g\274n\312T" # Uses: 0 00:08:52.437 "/?\000\000" # Uses: 0 00:08:52.437 ###### End of recommended dictionary. ###### 00:08:52.437 Done 44 runs in 2 second(s) 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:52.437 15:57:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:08:52.437 [2024-07-25 15:57:10.334076] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
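At the end of run 12 above, libFuzzer printed a "Recommended dictionary" listing the byte sequences (the CMP/PersAutoDict entries) that kept producing new coverage. Those suggestions correspond to the AFL-style dictionary format that libFuzzer itself consumes via -dict=<file>; whether this harness forwards such an option is not shown in this log, so the snippet below is only a sketch of how the recommendations could be persisted for later reuse, with a hypothetical file path.

# Hypothetical: save run 12's recommended tokens as a libFuzzer/AFL-style dictionary.
cat > /tmp/llvm_nvmf_12.dict <<'EOF'
kw1="U\x00\x00\x00"
kw2="\x01\x00\x00\x04"
kw3="\xff\x16\xd9\x67\xbc\x6e\xca\x54"
kw4="/?\x00\x00"
EOF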
00:08:52.437 [2024-07-25 15:57:10.334146] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170643 ] 00:08:52.437 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.696 [2024-07-25 15:57:10.512880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.696 [2024-07-25 15:57:10.580659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.696 [2024-07-25 15:57:10.639652] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.696 [2024-07-25 15:57:10.656118] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:08:52.696 INFO: Running with entropic power schedule (0xFF, 100). 00:08:52.696 INFO: Seed: 741768864 00:08:52.954 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:52.954 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:52.954 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:52.954 INFO: A corpus is not provided, starting from an empty corpus 00:08:52.954 #2 INITED exec/s: 0 rss: 63Mb 00:08:52.954 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:52.954 This may also happen if the target rejected all inputs we tried so far 00:08:52.954 [2024-07-25 15:57:10.711646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.954 [2024-07-25 15:57:10.711680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.954 [2024-07-25 15:57:10.711750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:89898989 cdw11:89898989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.954 [2024-07-25 15:57:10.711769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.954 [2024-07-25 15:57:10.711833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:89898989 cdw11:89898989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.954 [2024-07-25 15:57:10.711846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.954 NEW_FUNC[1/700]: 0x496390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:08:52.954 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:52.954 #8 NEW cov: 11996 ft: 11995 corp: 2/25b lim: 40 exec/s: 0 rss: 70Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:08:52.954 [2024-07-25 15:57:10.873947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.954 [2024-07-25 15:57:10.873988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.954 [2024-07-25 15:57:10.874096] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.954 [2024-07-25 15:57:10.874113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.954 [2024-07-25 15:57:10.874201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.954 [2024-07-25 15:57:10.874216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.954 #10 NEW cov: 12109 ft: 12462 corp: 3/53b lim: 40 exec/s: 0 rss: 70Mb L: 28/28 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:52.954 [2024-07-25 15:57:10.933834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.954 [2024-07-25 15:57:10.933866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.954 [2024-07-25 15:57:10.933949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.954 [2024-07-25 15:57:10.933963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.955 [2024-07-25 15:57:10.934060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.955 [2024-07-25 15:57:10.934073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.214 #11 NEW cov: 12115 ft: 12824 corp: 4/82b lim: 40 exec/s: 0 rss: 70Mb L: 29/29 MS: 1 CrossOver- 00:08:53.214 [2024-07-25 15:57:11.004444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.214 [2024-07-25 15:57:11.004470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.214 [2024-07-25 15:57:11.004568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.214 [2024-07-25 15:57:11.004583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.214 [2024-07-25 15:57:11.004680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ce0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.214 [2024-07-25 15:57:11.004692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.214 #12 NEW cov: 12200 ft: 13141 corp: 5/112b lim: 40 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 InsertByte- 00:08:53.214 [2024-07-25 15:57:11.074575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.214 [2024-07-25 
15:57:11.074599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.214 [2024-07-25 15:57:11.074687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.214 [2024-07-25 15:57:11.074700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.214 [2024-07-25 15:57:11.074813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.214 [2024-07-25 15:57:11.074826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.214 #13 NEW cov: 12200 ft: 13220 corp: 6/141b lim: 40 exec/s: 0 rss: 70Mb L: 29/30 MS: 1 ShuffleBytes- 00:08:53.214 [2024-07-25 15:57:11.124733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.214 [2024-07-25 15:57:11.124756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.214 [2024-07-25 15:57:11.124874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.214 [2024-07-25 15:57:11.124888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.214 [2024-07-25 15:57:11.124985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00040000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.214 [2024-07-25 15:57:11.124999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.214 #14 NEW cov: 12200 ft: 13272 corp: 7/169b lim: 40 exec/s: 0 rss: 70Mb L: 28/30 MS: 1 ChangeBit- 00:08:53.214 [2024-07-25 15:57:11.175244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.214 [2024-07-25 15:57:11.175269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.214 [2024-07-25 15:57:11.175377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00007500 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.214 [2024-07-25 15:57:11.175391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.214 [2024-07-25 15:57:11.175484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ce0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.214 [2024-07-25 15:57:11.175496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.474 #15 NEW cov: 12200 ft: 13320 corp: 8/199b lim: 40 exec/s: 0 rss: 71Mb L: 30/30 MS: 1 ChangeByte- 00:08:53.474 [2024-07-25 15:57:11.245420] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:27000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.474 [2024-07-25 15:57:11.245446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.474 [2024-07-25 15:57:11.245547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.474 [2024-07-25 15:57:11.245561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.474 [2024-07-25 15:57:11.245658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:04000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.474 [2024-07-25 15:57:11.245672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.474 #16 NEW cov: 12200 ft: 13355 corp: 9/230b lim: 40 exec/s: 0 rss: 71Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:08:53.474 [2024-07-25 15:57:11.316269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.474 [2024-07-25 15:57:11.316295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.474 [2024-07-25 15:57:11.316398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.474 [2024-07-25 15:57:11.316413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.474 [2024-07-25 15:57:11.316507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.474 [2024-07-25 15:57:11.316521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.474 [2024-07-25 15:57:11.316622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff000a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.474 [2024-07-25 15:57:11.316635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.474 #17 NEW cov: 12200 ft: 13862 corp: 10/264b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:08:53.474 [2024-07-25 15:57:11.386392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.474 [2024-07-25 15:57:11.386418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.474 [2024-07-25 15:57:11.386511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:89898981 cdw11:89898989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.474 [2024-07-25 15:57:11.386525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:08:53.474 [2024-07-25 15:57:11.386617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:89898989 cdw11:89898989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.474 [2024-07-25 15:57:11.386630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.474 #18 NEW cov: 12200 ft: 13914 corp: 11/288b lim: 40 exec/s: 0 rss: 72Mb L: 24/34 MS: 1 ChangeBit- 00:08:53.474 [2024-07-25 15:57:11.456839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:89818989 cdw11:89898989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.474 [2024-07-25 15:57:11.456867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.474 [2024-07-25 15:57:11.456959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:89898989 cdw11:89890000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.474 [2024-07-25 15:57:11.456973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.474 [2024-07-25 15:57:11.457070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ce0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.474 [2024-07-25 15:57:11.457083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.734 #19 NEW cov: 12200 ft: 13972 corp: 12/318b lim: 40 exec/s: 0 rss: 72Mb L: 30/34 MS: 1 CrossOver- 00:08:53.734 [2024-07-25 15:57:11.527430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.734 [2024-07-25 15:57:11.527459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.734 [2024-07-25 15:57:11.527551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.734 [2024-07-25 15:57:11.527566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.734 [2024-07-25 15:57:11.527663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:fffffcff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.734 [2024-07-25 15:57:11.527676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.734 [2024-07-25 15:57:11.527777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff000a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.734 [2024-07-25 15:57:11.527793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.734 #20 NEW cov: 12200 ft: 14003 corp: 13/352b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeBinInt- 00:08:53.734 [2024-07-25 15:57:11.597669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.734 [2024-07-25 15:57:11.597696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.734 [2024-07-25 15:57:11.597801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.734 [2024-07-25 15:57:11.597817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.734 [2024-07-25 15:57:11.597914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.734 [2024-07-25 15:57:11.597927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.734 [2024-07-25 15:57:11.598020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00fffffc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.734 [2024-07-25 15:57:11.598034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.734 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:53.734 #21 NEW cov: 12223 ft: 14111 corp: 14/389b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 CrossOver- 00:08:53.734 [2024-07-25 15:57:11.657572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.734 [2024-07-25 15:57:11.657600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.734 [2024-07-25 15:57:11.657704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.734 [2024-07-25 15:57:11.657717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.734 [2024-07-25 15:57:11.657813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00910000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.734 [2024-07-25 15:57:11.657826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.734 #22 NEW cov: 12223 ft: 14118 corp: 15/418b lim: 40 exec/s: 0 rss: 72Mb L: 29/37 MS: 1 ChangeByte- 00:08:53.734 [2024-07-25 15:57:11.707928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.734 [2024-07-25 15:57:11.707952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.734 [2024-07-25 15:57:11.708051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:89898981 cdw11:89898989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.734 [2024-07-25 15:57:11.708065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:08:53.734 [2024-07-25 15:57:11.708157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:89898989 cdw11:89818989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.734 [2024-07-25 15:57:11.708170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.994 #23 NEW cov: 12223 ft: 14124 corp: 16/442b lim: 40 exec/s: 23 rss: 72Mb L: 24/37 MS: 1 ChangeBit- 00:08:53.994 [2024-07-25 15:57:11.778453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a898989 cdw11:89898989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.994 [2024-07-25 15:57:11.778477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.994 [2024-07-25 15:57:11.778572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:89898981 cdw11:89898989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.994 [2024-07-25 15:57:11.778586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.994 [2024-07-25 15:57:11.778686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:89898918 cdw11:89818989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.994 [2024-07-25 15:57:11.778698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.994 #24 NEW cov: 12223 ft: 14159 corp: 17/466b lim: 40 exec/s: 24 rss: 72Mb L: 24/37 MS: 1 ChangeBinInt- 00:08:53.994 [2024-07-25 15:57:11.849372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.994 [2024-07-25 15:57:11.849396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.994 [2024-07-25 15:57:11.849488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.994 [2024-07-25 15:57:11.849506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.994 [2024-07-25 15:57:11.849598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.994 [2024-07-25 15:57:11.849613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.994 [2024-07-25 15:57:11.849706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.994 [2024-07-25 15:57:11.849719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.994 #25 NEW cov: 12223 ft: 14210 corp: 18/504b lim: 40 exec/s: 25 rss: 72Mb L: 38/38 MS: 1 CopyPart- 00:08:53.994 [2024-07-25 15:57:11.919939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:08:53.994 [2024-07-25 15:57:11.919963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.994 [2024-07-25 15:57:11.920060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.994 [2024-07-25 15:57:11.920072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.994 [2024-07-25 15:57:11.920159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.994 [2024-07-25 15:57:11.920172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.994 [2024-07-25 15:57:11.920264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00ffff7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.994 [2024-07-25 15:57:11.920276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.994 #26 NEW cov: 12223 ft: 14233 corp: 19/541b lim: 40 exec/s: 26 rss: 72Mb L: 37/38 MS: 1 ChangeByte- 00:08:53.994 [2024-07-25 15:57:11.969853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:27000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.994 [2024-07-25 15:57:11.969876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.994 [2024-07-25 15:57:11.969972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.994 [2024-07-25 15:57:11.969985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.994 [2024-07-25 15:57:11.970072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:04000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.994 [2024-07-25 15:57:11.970085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.254 #27 NEW cov: 12223 ft: 14282 corp: 20/572b lim: 40 exec/s: 27 rss: 72Mb L: 31/38 MS: 1 ChangeBinInt- 00:08:54.254 [2024-07-25 15:57:12.040454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.254 [2024-07-25 15:57:12.040479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.254 [2024-07-25 15:57:12.040569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.254 [2024-07-25 15:57:12.040585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.254 [2024-07-25 15:57:12.040669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) 
qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.254 [2024-07-25 15:57:12.040683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.254 #28 NEW cov: 12223 ft: 14294 corp: 21/602b lim: 40 exec/s: 28 rss: 72Mb L: 30/38 MS: 1 CrossOver- 00:08:54.254 [2024-07-25 15:57:12.090392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.254 [2024-07-25 15:57:12.090417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.254 [2024-07-25 15:57:12.090510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.254 [2024-07-25 15:57:12.090524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.254 #29 NEW cov: 12223 ft: 14523 corp: 22/623b lim: 40 exec/s: 29 rss: 72Mb L: 21/38 MS: 1 EraseBytes- 00:08:54.254 [2024-07-25 15:57:12.141163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270900 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.254 [2024-07-25 15:57:12.141188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.254 [2024-07-25 15:57:12.141281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.254 [2024-07-25 15:57:12.141296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.254 [2024-07-25 15:57:12.141383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.254 [2024-07-25 15:57:12.141396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.254 [2024-07-25 15:57:12.141496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.254 [2024-07-25 15:57:12.141508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.254 #30 NEW cov: 12223 ft: 14540 corp: 23/662b lim: 40 exec/s: 30 rss: 72Mb L: 39/39 MS: 1 CMP- DE: "\011\000"- 00:08:54.254 [2024-07-25 15:57:12.211031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:27000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.254 [2024-07-25 15:57:12.211055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.254 [2024-07-25 15:57:12.211147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:05000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.254 [2024-07-25 15:57:12.211161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.254 [2024-07-25 15:57:12.211247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:04000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.254 [2024-07-25 15:57:12.211260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.513 #31 NEW cov: 12223 ft: 14549 corp: 24/693b lim: 40 exec/s: 31 rss: 72Mb L: 31/39 MS: 1 ShuffleBytes- 00:08:54.513 [2024-07-25 15:57:12.271712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:27000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.513 [2024-07-25 15:57:12.271739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.513 [2024-07-25 15:57:12.271825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.513 [2024-07-25 15:57:12.271840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.513 [2024-07-25 15:57:12.271926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:10000000 cdw11:00040000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.513 [2024-07-25 15:57:12.271942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.513 [2024-07-25 15:57:12.272038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.513 [2024-07-25 15:57:12.272053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.513 #32 NEW cov: 12223 ft: 14562 corp: 25/725b lim: 40 exec/s: 32 rss: 72Mb L: 32/39 MS: 1 InsertByte- 00:08:54.513 [2024-07-25 15:57:12.331834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:27000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.513 [2024-07-25 15:57:12.331859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.513 [2024-07-25 15:57:12.331959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.513 [2024-07-25 15:57:12.331973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.513 [2024-07-25 15:57:12.332066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00090000 cdw11:00000400 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.513 [2024-07-25 15:57:12.332079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.513 [2024-07-25 15:57:12.332168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.513 [2024-07-25 15:57:12.332181] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.513 #33 NEW cov: 12223 ft: 14568 corp: 26/758b lim: 40 exec/s: 33 rss: 72Mb L: 33/39 MS: 1 PersAutoDict- DE: "\011\000"- 00:08:54.513 [2024-07-25 15:57:12.381724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.513 [2024-07-25 15:57:12.381749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.513 [2024-07-25 15:57:12.381853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.513 [2024-07-25 15:57:12.381868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.513 [2024-07-25 15:57:12.381958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.513 [2024-07-25 15:57:12.381973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.513 #34 NEW cov: 12223 ft: 14607 corp: 27/788b lim: 40 exec/s: 34 rss: 72Mb L: 30/39 MS: 1 ShuffleBytes- 00:08:54.513 [2024-07-25 15:57:12.452046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270900 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.513 [2024-07-25 15:57:12.452070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.513 [2024-07-25 15:57:12.452168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.514 [2024-07-25 15:57:12.452181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.514 [2024-07-25 15:57:12.452268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.514 [2024-07-25 15:57:12.452280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.514 #35 NEW cov: 12223 ft: 14616 corp: 28/817b lim: 40 exec/s: 35 rss: 72Mb L: 29/39 MS: 1 PersAutoDict- DE: "\011\000"- 00:08:54.514 [2024-07-25 15:57:12.502492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00004000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.514 [2024-07-25 15:57:12.502517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.514 [2024-07-25 15:57:12.502605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.514 [2024-07-25 15:57:12.502618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.514 [2024-07-25 15:57:12.502706] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:fffffcff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.514 [2024-07-25 15:57:12.502720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.514 [2024-07-25 15:57:12.502811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff000a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.514 [2024-07-25 15:57:12.502824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.773 #36 NEW cov: 12223 ft: 14628 corp: 29/851b lim: 40 exec/s: 36 rss: 73Mb L: 34/39 MS: 1 ChangeBit- 00:08:54.773 [2024-07-25 15:57:12.562522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270900 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.773 [2024-07-25 15:57:12.562547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.773 [2024-07-25 15:57:12.562642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.773 [2024-07-25 15:57:12.562658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.773 [2024-07-25 15:57:12.562752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.773 [2024-07-25 15:57:12.562770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.773 [2024-07-25 15:57:12.562868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.773 [2024-07-25 15:57:12.562883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.773 #37 NEW cov: 12223 ft: 14641 corp: 30/890b lim: 40 exec/s: 37 rss: 73Mb L: 39/39 MS: 1 PersAutoDict- DE: "\011\000"- 00:08:54.773 [2024-07-25 15:57:12.612474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a010200 cdw11:00898989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.773 [2024-07-25 15:57:12.612498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.773 [2024-07-25 15:57:12.612584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:89898981 cdw11:89898989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.774 [2024-07-25 15:57:12.612597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.774 [2024-07-25 15:57:12.612680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:89898918 cdw11:89818989 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.774 [2024-07-25 15:57:12.612691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.774 #38 NEW cov: 12223 ft: 14647 corp: 31/914b lim: 40 exec/s: 38 rss: 73Mb L: 24/39 MS: 1 CMP- DE: "\001\002\000\000"- 00:08:54.774 [2024-07-25 15:57:12.672417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a270000 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.774 [2024-07-25 15:57:12.672441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.774 [2024-07-25 15:57:12.672537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:02000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.774 [2024-07-25 15:57:12.672549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.774 #39 NEW cov: 12223 ft: 14656 corp: 32/935b lim: 40 exec/s: 19 rss: 73Mb L: 21/39 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:08:54.774 #39 DONE cov: 12223 ft: 14656 corp: 32/935b lim: 40 exec/s: 19 rss: 73Mb 00:08:54.774 ###### Recommended dictionary. ###### 00:08:54.774 "\011\000" # Uses: 3 00:08:54.774 "\001\002\000\000" # Uses: 1 00:08:54.774 ###### End of recommended dictionary. ###### 00:08:54.774 Done 39 runs in 2 second(s) 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:55.033 15:57:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:08:55.033 [2024-07-25 15:57:12.864059] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:08:55.033 [2024-07-25 15:57:12.864118] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171375 ] 00:08:55.033 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.291 [2024-07-25 15:57:13.037079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.291 [2024-07-25 15:57:13.104890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.291 [2024-07-25 15:57:13.163642] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.291 [2024-07-25 15:57:13.179848] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:08:55.291 INFO: Running with entropic power schedule (0xFF, 100). 00:08:55.291 INFO: Seed: 3266767025 00:08:55.291 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:55.291 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:55.291 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:55.291 INFO: A corpus is not provided, starting from an empty corpus 00:08:55.291 #2 INITED exec/s: 0 rss: 63Mb 00:08:55.291 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:55.291 This may also happen if the target rejected all inputs we tried so far 00:08:55.291 [2024-07-25 15:57:13.235320] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.291 [2024-07-25 15:57:13.235355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.549 NEW_FUNC[1/701]: 0x497f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:08:55.549 NEW_FUNC[2/701]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:55.549 #9 NEW cov: 11989 ft: 11990 corp: 2/11b lim: 35 exec/s: 0 rss: 70Mb L: 10/10 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:55.549 [2024-07-25 15:57:13.386157] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.549 [2024-07-25 15:57:13.386208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.549 [2024-07-25 15:57:13.386294] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.549 [2024-07-25 15:57:13.386315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.549 NEW_FUNC[1/1]: 0x1ac3750 in sock_group_impl_poll_count /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:749 00:08:55.549 #18 NEW cov: 12120 ft: 13271 corp: 3/30b lim: 35 exec/s: 0 rss: 71Mb L: 19/19 MS: 4 ChangeByte-ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:08:55.549 [2024-07-25 15:57:13.435877] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.549 [2024-07-25 15:57:13.435902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.549 NEW_FUNC[1/2]: 0x4b3c50 in feat_power_management /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:282 00:08:55.549 NEW_FUNC[2/2]: 0x11f1f10 in nvmf_ctrlr_set_features_power_management /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1617 00:08:55.549 #19 NEW cov: 12176 ft: 13551 corp: 4/49b lim: 35 exec/s: 0 rss: 71Mb L: 19/19 MS: 1 ChangeBinInt- 00:08:55.549 [2024-07-25 15:57:13.495857] ctrlr.c:1626:nvmf_ctrlr_set_features_power_management: *ERROR*: Invalid power state 25 00:08:55.549 [2024-07-25 15:57:13.496108] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.549 [2024-07-25 15:57:13.496132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.549 [2024-07-25 15:57:13.496185] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.549 [2024-07-25 15:57:13.496197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.549 #20 NEW cov: 12273 ft: 13764 corp: 5/68b 
lim: 35 exec/s: 0 rss: 71Mb L: 19/19 MS: 1 CMP- DE: "\310\321\235ne\331\027\000"- 00:08:55.807 [2024-07-25 15:57:13.546193] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.807 [2024-07-25 15:57:13.546220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.807 [2024-07-25 15:57:13.546278] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.807 [2024-07-25 15:57:13.546292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.807 #21 NEW cov: 12273 ft: 13846 corp: 6/84b lim: 35 exec/s: 0 rss: 71Mb L: 16/19 MS: 1 InsertRepeatedBytes- 00:08:55.807 [2024-07-25 15:57:13.596151] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.807 [2024-07-25 15:57:13.596174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.807 #22 NEW cov: 12273 ft: 13991 corp: 7/94b lim: 35 exec/s: 0 rss: 71Mb L: 10/19 MS: 1 EraseBytes- 00:08:55.807 [2024-07-25 15:57:13.646339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.807 [2024-07-25 15:57:13.646365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.807 #26 NEW cov: 12273 ft: 14150 corp: 8/101b lim: 35 exec/s: 0 rss: 71Mb L: 7/19 MS: 4 ShuffleBytes-CopyPart-CopyPart-InsertRepeatedBytes- 00:08:55.807 [2024-07-25 15:57:13.686425] ctrlr.c:1626:nvmf_ctrlr_set_features_power_management: *ERROR*: Invalid power state 25 00:08:55.807 [2024-07-25 15:57:13.686795] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.807 [2024-07-25 15:57:13.686819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.807 [2024-07-25 15:57:13.686874] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.807 [2024-07-25 15:57:13.686886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.807 [2024-07-25 15:57:13.686943] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.807 [2024-07-25 15:57:13.686954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.807 #27 NEW cov: 12273 ft: 14389 corp: 9/128b lim: 35 exec/s: 0 rss: 71Mb L: 27/27 MS: 1 PersAutoDict- DE: "\310\321\235ne\331\027\000"- 00:08:55.807 [2024-07-25 15:57:13.726548] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.807 [2024-07-25 15:57:13.726571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:08:55.807 #28 NEW cov: 12273 ft: 14456 corp: 10/138b lim: 35 exec/s: 0 rss: 72Mb L: 10/27 MS: 1 ChangeBit- 00:08:55.807 [2024-07-25 15:57:13.776678] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.807 [2024-07-25 15:57:13.776702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.066 #29 NEW cov: 12273 ft: 14559 corp: 11/148b lim: 35 exec/s: 0 rss: 72Mb L: 10/27 MS: 1 ChangeBinInt- 00:08:56.066 [2024-07-25 15:57:13.817140] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.066 [2024-07-25 15:57:13.817161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.066 [2024-07-25 15:57:13.817218] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.066 [2024-07-25 15:57:13.817230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.066 [2024-07-25 15:57:13.817287] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.066 [2024-07-25 15:57:13.817299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.066 #35 NEW cov: 12273 ft: 14635 corp: 12/172b lim: 35 exec/s: 0 rss: 72Mb L: 24/27 MS: 1 InsertRepeatedBytes- 00:08:56.066 [2024-07-25 15:57:13.856892] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000f5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.066 [2024-07-25 15:57:13.856915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.066 #40 NEW cov: 12273 ft: 14677 corp: 13/180b lim: 35 exec/s: 0 rss: 72Mb L: 8/27 MS: 5 ShuffleBytes-InsertByte-ChangeByte-ChangeBit-CrossOver- 00:08:56.066 [2024-07-25 15:57:13.897028] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT COALESCING cid:4 cdw10:00000008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.066 [2024-07-25 15:57:13.897051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE NOT CHANGEABLE (01/0e) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.066 NEW_FUNC[1/1]: 0x4b7de0 in feat_interrupt_coalescing /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:325 00:08:56.066 #41 NEW cov: 12297 ft: 14740 corp: 14/188b lim: 35 exec/s: 0 rss: 72Mb L: 8/27 MS: 1 ChangeBinInt- 00:08:56.066 [2024-07-25 15:57:13.947183] ctrlr.c:1626:nvmf_ctrlr_set_features_power_management: *ERROR*: Invalid power state 25 00:08:56.066 [2024-07-25 15:57:13.947350] ctrlr.c:1864:nvmf_ctrlr_set_features_reservation_notification_mask: *ERROR*: Set Features - Invalid Namespace ID 00:08:56.066 [2024-07-25 15:57:13.947574] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.066 [2024-07-25 15:57:13.947597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.066 [2024-07-25 
15:57:13.947652] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.066 [2024-07-25 15:57:13.947664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.066 [2024-07-25 15:57:13.947722] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE MASK cid:6 cdw10:00000082 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.066 [2024-07-25 15:57:13.947736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.066 NEW_FUNC[1/2]: 0x4be290 in feat_rsv_notification_mask /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:378 00:08:56.066 NEW_FUNC[2/2]: 0x11f9c80 in nvmf_ctrlr_set_features_reservation_notification_mask /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1845 00:08:56.066 #42 NEW cov: 12349 ft: 14825 corp: 15/210b lim: 35 exec/s: 0 rss: 72Mb L: 22/27 MS: 1 InsertRepeatedBytes- 00:08:56.066 [2024-07-25 15:57:13.987261] ctrlr.c:1626:nvmf_ctrlr_set_features_power_management: *ERROR*: Invalid power state 25 00:08:56.066 [2024-07-25 15:57:13.987508] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.066 [2024-07-25 15:57:13.987532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.066 [2024-07-25 15:57:13.987588] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.066 [2024-07-25 15:57:13.987599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.066 #43 NEW cov: 12349 ft: 14844 corp: 16/229b lim: 35 exec/s: 0 rss: 72Mb L: 19/27 MS: 1 CrossOver- 00:08:56.066 [2024-07-25 15:57:14.027431] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.066 [2024-07-25 15:57:14.027454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.066 #44 NEW cov: 12349 ft: 14882 corp: 17/239b lim: 35 exec/s: 0 rss: 72Mb L: 10/27 MS: 1 PersAutoDict- DE: "\310\321\235ne\331\027\000"- 00:08:56.325 [2024-07-25 15:57:14.067549] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000dd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.325 [2024-07-25 15:57:14.067573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.325 #45 NEW cov: 12349 ft: 14886 corp: 18/249b lim: 35 exec/s: 0 rss: 72Mb L: 10/27 MS: 1 CMP- DE: "\335\3734\002\000\000\000\000"- 00:08:56.325 [2024-07-25 15:57:14.117688] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.325 [2024-07-25 15:57:14.117711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.325 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 
00:08:56.325 #46 NEW cov: 12372 ft: 14930 corp: 19/259b lim: 35 exec/s: 0 rss: 72Mb L: 10/27 MS: 1 ChangeBinInt- 00:08:56.325 [2024-07-25 15:57:14.157846] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000dd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.325 [2024-07-25 15:57:14.157871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.325 #47 NEW cov: 12372 ft: 14940 corp: 20/269b lim: 35 exec/s: 0 rss: 72Mb L: 10/27 MS: 1 ChangeBinInt- 00:08:56.325 [2024-07-25 15:57:14.208154] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000dd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.325 [2024-07-25 15:57:14.208178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.325 [2024-07-25 15:57:14.208237] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000043 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.325 [2024-07-25 15:57:14.208248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.325 #48 NEW cov: 12372 ft: 14944 corp: 21/287b lim: 35 exec/s: 48 rss: 72Mb L: 18/27 MS: 1 InsertRepeatedBytes- 00:08:56.325 [2024-07-25 15:57:14.258132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000dd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.325 [2024-07-25 15:57:14.258155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.325 #49 NEW cov: 12372 ft: 14970 corp: 22/297b lim: 35 exec/s: 49 rss: 72Mb L: 10/27 MS: 1 ShuffleBytes- 00:08:56.325 [2024-07-25 15:57:14.298391] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.325 [2024-07-25 15:57:14.298413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.325 [2024-07-25 15:57:14.298472] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.325 [2024-07-25 15:57:14.298483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.583 #50 NEW cov: 12372 ft: 14993 corp: 23/317b lim: 35 exec/s: 50 rss: 72Mb L: 20/27 MS: 1 InsertByte- 00:08:56.583 [2024-07-25 15:57:14.338373] ctrlr.c:1626:nvmf_ctrlr_set_features_power_management: *ERROR*: Invalid power state 25 00:08:56.583 [2024-07-25 15:57:14.338863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.583 [2024-07-25 15:57:14.338885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.583 [2024-07-25 15:57:14.338940] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.583 [2024-07-25 15:57:14.338952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.583 [2024-07-25 15:57:14.339007] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000050 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.583 [2024-07-25 15:57:14.339018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.583 [2024-07-25 15:57:14.339075] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.583 [2024-07-25 15:57:14.339089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.583 #51 NEW cov: 12372 ft: 15283 corp: 24/350b lim: 35 exec/s: 51 rss: 72Mb L: 33/33 MS: 1 CopyPart- 00:08:56.583 [2024-07-25 15:57:14.388618] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.583 [2024-07-25 15:57:14.388640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.583 [2024-07-25 15:57:14.388698] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.583 [2024-07-25 15:57:14.388711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.583 #52 NEW cov: 12372 ft: 15289 corp: 25/367b lim: 35 exec/s: 52 rss: 72Mb L: 17/33 MS: 1 EraseBytes- 00:08:56.583 [2024-07-25 15:57:14.438973] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.583 [2024-07-25 15:57:14.438996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.583 [2024-07-25 15:57:14.439070] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.583 [2024-07-25 15:57:14.439082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.583 [2024-07-25 15:57:14.439142] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.583 [2024-07-25 15:57:14.439155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.583 #53 NEW cov: 12372 ft: 15348 corp: 26/394b lim: 35 exec/s: 53 rss: 72Mb L: 27/33 MS: 1 InsertRepeatedBytes- 00:08:56.583 [2024-07-25 15:57:14.478786] ctrlr.c:1626:nvmf_ctrlr_set_features_power_management: *ERROR*: Invalid power state 25 00:08:56.583 [2024-07-25 15:57:14.479292] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.583 [2024-07-25 15:57:14.479315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.583 [2024-07-25 15:57:14.479371] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.583 [2024-07-25 15:57:14.479382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:08:56.583 [2024-07-25 15:57:14.479441] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.583 [2024-07-25 15:57:14.479451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.583 [2024-07-25 15:57:14.479509] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000065 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.583 [2024-07-25 15:57:14.479522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.583 #54 NEW cov: 12372 ft: 15361 corp: 27/422b lim: 35 exec/s: 54 rss: 72Mb L: 28/33 MS: 1 InsertByte- 00:08:56.583 [2024-07-25 15:57:14.529058] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.583 [2024-07-25 15:57:14.529080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.583 [2024-07-25 15:57:14.529139] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.583 [2024-07-25 15:57:14.529152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.583 #55 NEW cov: 12372 ft: 15398 corp: 28/440b lim: 35 exec/s: 55 rss: 72Mb L: 18/33 MS: 1 InsertByte- 00:08:56.842 [2024-07-25 15:57:14.579211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000dd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.842 [2024-07-25 15:57:14.579236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.842 [2024-07-25 15:57:14.579294] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000043 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.842 [2024-07-25 15:57:14.579304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.842 #56 NEW cov: 12372 ft: 15442 corp: 29/458b lim: 35 exec/s: 56 rss: 72Mb L: 18/33 MS: 1 ChangeBit- 00:08:56.842 [2024-07-25 15:57:14.629491] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.842 [2024-07-25 15:57:14.629513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.842 [2024-07-25 15:57:14.629571] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.842 [2024-07-25 15:57:14.629582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.842 [2024-07-25 15:57:14.629641] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.842 [2024-07-25 15:57:14.629652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.842 #57 NEW cov: 12372 ft: 15454 corp: 30/479b lim: 35 exec/s: 57 
rss: 72Mb L: 21/33 MS: 1 InsertByte- 00:08:56.842 [2024-07-25 15:57:14.679457] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000dd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.842 [2024-07-25 15:57:14.679481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.842 [2024-07-25 15:57:14.679541] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000043 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.842 [2024-07-25 15:57:14.679551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.842 #58 NEW cov: 12372 ft: 15465 corp: 31/497b lim: 35 exec/s: 58 rss: 72Mb L: 18/33 MS: 1 ChangeBit- 00:08:56.842 [2024-07-25 15:57:14.729419] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.842 [2024-07-25 15:57:14.729442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.842 #59 NEW cov: 12372 ft: 15480 corp: 32/507b lim: 35 exec/s: 59 rss: 72Mb L: 10/33 MS: 1 ChangeByte- 00:08:56.842 [2024-07-25 15:57:14.769611] ctrlr.c:1626:nvmf_ctrlr_set_features_power_management: *ERROR*: Invalid power state 25 00:08:56.842 [2024-07-25 15:57:14.770128] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.842 [2024-07-25 15:57:14.770151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.842 [2024-07-25 15:57:14.770205] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.842 [2024-07-25 15:57:14.770216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.842 [2024-07-25 15:57:14.770270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000fb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.842 [2024-07-25 15:57:14.770281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.842 [2024-07-25 15:57:14.770338] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000065 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.842 [2024-07-25 15:57:14.770350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.842 #65 NEW cov: 12372 ft: 15494 corp: 33/535b lim: 35 exec/s: 65 rss: 72Mb L: 28/33 MS: 1 PersAutoDict- DE: "\335\3734\002\000\000\000\000"- 00:08:56.842 [2024-07-25 15:57:14.820060] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000dd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.842 [2024-07-25 15:57:14.820085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.842 [2024-07-25 15:57:14.820157] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000043 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.842 [2024-07-25 
15:57:14.820169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.842 [2024-07-25 15:57:14.820225] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000006e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.842 [2024-07-25 15:57:14.820236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.102 #66 NEW cov: 12372 ft: 15497 corp: 34/561b lim: 35 exec/s: 66 rss: 73Mb L: 26/33 MS: 1 PersAutoDict- DE: "\310\321\235ne\331\027\000"- 00:08:57.102 [2024-07-25 15:57:14.870002] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000dd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.102 [2024-07-25 15:57:14.870027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.102 [2024-07-25 15:57:14.870099] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000043 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.102 [2024-07-25 15:57:14.870110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.102 #67 NEW cov: 12372 ft: 15511 corp: 35/579b lim: 35 exec/s: 67 rss: 73Mb L: 18/33 MS: 1 ChangeBit- 00:08:57.102 [2024-07-25 15:57:14.910303] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.102 [2024-07-25 15:57:14.910326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.102 [2024-07-25 15:57:14.910382] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000065 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.102 [2024-07-25 15:57:14.910395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.102 [2024-07-25 15:57:14.910452] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.102 [2024-07-25 15:57:14.910464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.102 #68 NEW cov: 12372 ft: 15522 corp: 36/606b lim: 35 exec/s: 68 rss: 73Mb L: 27/33 MS: 1 CopyPart- 00:08:57.102 [2024-07-25 15:57:14.960265] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.102 [2024-07-25 15:57:14.960288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.102 [2024-07-25 15:57:14.960345] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.102 [2024-07-25 15:57:14.960356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.102 #69 NEW cov: 12372 ft: 15528 corp: 37/625b lim: 35 exec/s: 69 rss: 73Mb L: 19/33 MS: 1 ShuffleBytes- 00:08:57.102 [2024-07-25 15:57:15.000390] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 
cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.102 [2024-07-25 15:57:15.000414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.102 [2024-07-25 15:57:15.000485] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.102 [2024-07-25 15:57:15.000497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.102 #70 NEW cov: 12372 ft: 15554 corp: 38/642b lim: 35 exec/s: 70 rss: 73Mb L: 17/33 MS: 1 ChangeBit- 00:08:57.102 [2024-07-25 15:57:15.040471] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000dd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.102 [2024-07-25 15:57:15.040496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.102 [2024-07-25 15:57:15.040571] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000043 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.102 [2024-07-25 15:57:15.040582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.102 #71 NEW cov: 12372 ft: 15585 corp: 39/661b lim: 35 exec/s: 71 rss: 73Mb L: 19/33 MS: 1 InsertByte- 00:08:57.102 [2024-07-25 15:57:15.080423] ctrlr.c:1626:nvmf_ctrlr_set_features_power_management: *ERROR*: Invalid power state 15 00:08:57.102 [2024-07-25 15:57:15.080655] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.102 [2024-07-25 15:57:15.080678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.102 [2024-07-25 15:57:15.080732] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.102 [2024-07-25 15:57:15.080743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.361 #72 NEW cov: 12372 ft: 15606 corp: 40/680b lim: 35 exec/s: 72 rss: 73Mb L: 19/33 MS: 1 ChangeByte- 00:08:57.361 [2024-07-25 15:57:15.120612] ctrlr.c:1626:nvmf_ctrlr_set_features_power_management: *ERROR*: Invalid power state 25 00:08:57.361 [2024-07-25 15:57:15.121130] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.361 [2024-07-25 15:57:15.121153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.361 [2024-07-25 15:57:15.121207] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.361 [2024-07-25 15:57:15.121218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.361 [2024-07-25 15:57:15.121275] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000050 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.361 [2024-07-25 15:57:15.121286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.361 [2024-07-25 15:57:15.121342] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.361 [2024-07-25 15:57:15.121356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.361 #73 NEW cov: 12372 ft: 15608 corp: 41/710b lim: 35 exec/s: 73 rss: 73Mb L: 30/33 MS: 1 EraseBytes- 00:08:57.361 [2024-07-25 15:57:15.170748] ctrlr.c:1626:nvmf_ctrlr_set_features_power_management: *ERROR*: Invalid power state 25 00:08:57.361 [2024-07-25 15:57:15.171270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.361 [2024-07-25 15:57:15.171293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.361 [2024-07-25 15:57:15.171349] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.361 [2024-07-25 15:57:15.171360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.361 [2024-07-25 15:57:15.171415] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000fb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.361 [2024-07-25 15:57:15.171426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.361 [2024-07-25 15:57:15.171482] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000065 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.361 [2024-07-25 15:57:15.171494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.361 #74 NEW cov: 12372 ft: 15616 corp: 42/739b lim: 35 exec/s: 74 rss: 73Mb L: 29/33 MS: 1 InsertByte- 00:08:57.361 [2024-07-25 15:57:15.221015] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.361 [2024-07-25 15:57:15.221036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.361 [2024-07-25 15:57:15.221107] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000009d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.361 [2024-07-25 15:57:15.221118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.361 #75 NEW cov: 12372 ft: 15620 corp: 43/756b lim: 35 exec/s: 37 rss: 73Mb L: 17/33 MS: 1 PersAutoDict- DE: "\310\321\235ne\331\027\000"- 00:08:57.361 #75 DONE cov: 12372 ft: 15620 corp: 43/756b lim: 35 exec/s: 37 rss: 73Mb 00:08:57.361 ###### Recommended dictionary. ###### 00:08:57.361 "\310\321\235ne\331\027\000" # Uses: 4 00:08:57.362 "\335\3734\002\000\000\000\000" # Uses: 1 00:08:57.362 ###### End of recommended dictionary. 
###### 00:08:57.362 Done 75 runs in 2 second(s) 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:57.621 15:57:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:08:57.621 [2024-07-25 15:57:15.399692] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:08:57.621 [2024-07-25 15:57:15.399775] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171804 ] 00:08:57.621 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.621 [2024-07-25 15:57:15.582416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.879 [2024-07-25 15:57:15.647433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.880 [2024-07-25 15:57:15.706065] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.880 [2024-07-25 15:57:15.722283] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:08:57.880 INFO: Running with entropic power schedule (0xFF, 100). 00:08:57.880 INFO: Seed: 1514802614 00:08:57.880 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:08:57.880 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:08:57.880 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:57.880 INFO: A corpus is not provided, starting from an empty corpus 00:08:57.880 #2 INITED exec/s: 0 rss: 64Mb 00:08:57.880 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:57.880 This may also happen if the target rejected all inputs we tried so far 00:08:57.880 [2024-07-25 15:57:15.766989] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.880 [2024-07-25 15:57:15.767020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.138 NEW_FUNC[1/700]: 0x499490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:08:58.138 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:58.138 #4 NEW cov: 11978 ft: 11950 corp: 2/13b lim: 35 exec/s: 0 rss: 71Mb L: 12/12 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:58.138 [2024-07-25 15:57:15.947478] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000139 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.139 [2024-07-25 15:57:15.947514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.139 NEW_FUNC[1/1]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:58.139 #5 NEW cov: 12105 ft: 12725 corp: 3/29b lim: 35 exec/s: 0 rss: 71Mb L: 16/16 MS: 1 InsertRepeatedBytes- 00:08:58.139 [2024-07-25 15:57:16.007461] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.139 [2024-07-25 15:57:16.007489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.139 #6 NEW cov: 12111 ft: 12847 corp: 4/41b lim: 35 exec/s: 0 rss: 71Mb L: 12/16 MS: 1 ShuffleBytes- 00:08:58.139 [2024-07-25 15:57:16.087695] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 
cdw10:00000139 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.139 [2024-07-25 15:57:16.087721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.397 #7 NEW cov: 12196 ft: 13200 corp: 5/57b lim: 35 exec/s: 0 rss: 71Mb L: 16/16 MS: 1 ChangeBit- 00:08:58.397 [2024-07-25 15:57:16.178018] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000139 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.397 [2024-07-25 15:57:16.178048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.397 #8 NEW cov: 12196 ft: 13378 corp: 6/73b lim: 35 exec/s: 0 rss: 72Mb L: 16/16 MS: 1 ChangeByte- 00:08:58.397 [2024-07-25 15:57:16.258097] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.397 [2024-07-25 15:57:16.258124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.397 #9 NEW cov: 12196 ft: 13508 corp: 7/85b lim: 35 exec/s: 0 rss: 72Mb L: 12/16 MS: 1 ChangeByte- 00:08:58.397 [2024-07-25 15:57:16.318314] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.397 [2024-07-25 15:57:16.318343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.397 #10 NEW cov: 12196 ft: 13613 corp: 8/97b lim: 35 exec/s: 0 rss: 72Mb L: 12/16 MS: 1 CopyPart- 00:08:58.656 [2024-07-25 15:57:16.398480] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.656 [2024-07-25 15:57:16.398507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.656 #11 NEW cov: 12196 ft: 13653 corp: 9/109b lim: 35 exec/s: 0 rss: 72Mb L: 12/16 MS: 1 ChangeByte- 00:08:58.656 [2024-07-25 15:57:16.448617] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.656 [2024-07-25 15:57:16.448644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.656 #12 NEW cov: 12196 ft: 13761 corp: 10/121b lim: 35 exec/s: 0 rss: 72Mb L: 12/16 MS: 1 CopyPart- 00:08:58.656 [2024-07-25 15:57:16.528870] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.656 [2024-07-25 15:57:16.528897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.656 [2024-07-25 15:57:16.528942] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.656 [2024-07-25 15:57:16.528954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.656 #13 NEW cov: 12196 ft: 13960 corp: 11/140b lim: 35 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:08:58.656 [2024-07-25 15:57:16.588968] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 
cdw10:00000139 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.656 [2024-07-25 15:57:16.588997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.656 #14 NEW cov: 12196 ft: 14000 corp: 12/151b lim: 35 exec/s: 0 rss: 72Mb L: 11/19 MS: 1 EraseBytes- 00:08:58.915 [2024-07-25 15:57:16.649141] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.915 [2024-07-25 15:57:16.649169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.915 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:58.915 #15 NEW cov: 12219 ft: 14059 corp: 13/163b lim: 35 exec/s: 0 rss: 72Mb L: 12/19 MS: 1 CopyPart- 00:08:58.915 [2024-07-25 15:57:16.729428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000139 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.915 [2024-07-25 15:57:16.729454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.915 #16 NEW cov: 12219 ft: 14083 corp: 14/179b lim: 35 exec/s: 16 rss: 72Mb L: 16/19 MS: 1 ShuffleBytes- 00:08:58.915 [2024-07-25 15:57:16.779535] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000139 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.915 [2024-07-25 15:57:16.779560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.915 #17 NEW cov: 12219 ft: 14121 corp: 15/195b lim: 35 exec/s: 17 rss: 72Mb L: 16/19 MS: 1 CopyPart- 00:08:58.915 [2024-07-25 15:57:16.839678] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000039 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.915 [2024-07-25 15:57:16.839704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.915 #18 NEW cov: 12219 ft: 14154 corp: 16/211b lim: 35 exec/s: 18 rss: 72Mb L: 16/19 MS: 1 ChangeBinInt- 00:08:58.915 [2024-07-25 15:57:16.889741] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:58.915 [2024-07-25 15:57:16.889775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.173 #19 NEW cov: 12219 ft: 14161 corp: 17/223b lim: 35 exec/s: 19 rss: 72Mb L: 12/19 MS: 1 ChangeBinInt- 00:08:59.173 [2024-07-25 15:57:16.939995] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000198 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.173 [2024-07-25 15:57:16.940022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.173 #20 NEW cov: 12219 ft: 14201 corp: 18/239b lim: 35 exec/s: 20 rss: 72Mb L: 16/19 MS: 1 ChangeByte- 00:08:59.173 [2024-07-25 15:57:16.990093] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.173 [2024-07-25 15:57:16.990120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:08:59.174 [2024-07-25 15:57:16.990151] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.174 [2024-07-25 15:57:16.990163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.174 #21 NEW cov: 12219 ft: 14210 corp: 19/258b lim: 35 exec/s: 21 rss: 72Mb L: 19/19 MS: 1 ChangeByte- 00:08:59.174 [2024-07-25 15:57:17.070324] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000198 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.174 [2024-07-25 15:57:17.070352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.174 #22 NEW cov: 12219 ft: 14226 corp: 20/274b lim: 35 exec/s: 22 rss: 72Mb L: 16/19 MS: 1 CrossOver- 00:08:59.174 [2024-07-25 15:57:17.150515] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000139 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.174 [2024-07-25 15:57:17.150541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.432 #23 NEW cov: 12219 ft: 14257 corp: 21/290b lim: 35 exec/s: 23 rss: 72Mb L: 16/19 MS: 1 CopyPart- 00:08:59.432 [2024-07-25 15:57:17.200588] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.432 [2024-07-25 15:57:17.200615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.432 #24 NEW cov: 12219 ft: 14302 corp: 22/302b lim: 35 exec/s: 24 rss: 72Mb L: 12/19 MS: 1 ChangeBinInt- 00:08:59.432 [2024-07-25 15:57:17.280881] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000139 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.432 [2024-07-25 15:57:17.280907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.432 #25 NEW cov: 12219 ft: 14306 corp: 23/318b lim: 35 exec/s: 25 rss: 72Mb L: 16/19 MS: 1 ChangeBit- 00:08:59.432 [2024-07-25 15:57:17.330935] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.432 [2024-07-25 15:57:17.330962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.432 #26 NEW cov: 12219 ft: 14393 corp: 24/330b lim: 35 exec/s: 26 rss: 72Mb L: 12/19 MS: 1 ChangeByte- 00:08:59.432 [2024-07-25 15:57:17.381139] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000139 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.432 [2024-07-25 15:57:17.381166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.691 #27 NEW cov: 12219 ft: 14427 corp: 25/347b lim: 35 exec/s: 27 rss: 72Mb L: 17/19 MS: 1 InsertByte- 00:08:59.691 [2024-07-25 15:57:17.461286] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.691 [2024-07-25 15:57:17.461318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:08:59.691 #28 NEW cov: 12219 ft: 14468 corp: 26/359b lim: 35 exec/s: 28 rss: 72Mb L: 12/19 MS: 1 ChangeBinInt- 00:08:59.691 #29 NEW cov: 12219 ft: 14489 corp: 27/369b lim: 35 exec/s: 29 rss: 72Mb L: 10/19 MS: 1 EraseBytes- 00:08:59.691 [2024-07-25 15:57:17.561533] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.691 [2024-07-25 15:57:17.561562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.691 #30 NEW cov: 12219 ft: 14500 corp: 28/381b lim: 35 exec/s: 30 rss: 72Mb L: 12/19 MS: 1 ChangeByte- 00:08:59.691 [2024-07-25 15:57:17.611828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.691 [2024-07-25 15:57:17.611857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.691 [2024-07-25 15:57:17.611904] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.691 [2024-07-25 15:57:17.611917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.691 [2024-07-25 15:57:17.611955] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.691 [2024-07-25 15:57:17.611968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:59.691 [2024-07-25 15:57:17.611994] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.691 [2024-07-25 15:57:17.612007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:59.691 #31 NEW cov: 12219 ft: 14978 corp: 29/415b lim: 35 exec/s: 31 rss: 72Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:08:59.691 [2024-07-25 15:57:17.671798] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.691 [2024-07-25 15:57:17.671825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.950 #32 NEW cov: 12219 ft: 15002 corp: 30/427b lim: 35 exec/s: 32 rss: 72Mb L: 12/34 MS: 1 CrossOver- 00:08:59.950 [2024-07-25 15:57:17.752119] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000198 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:59.950 [2024-07-25 15:57:17.752147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.950 #33 NEW cov: 12219 ft: 15023 corp: 31/444b lim: 35 exec/s: 16 rss: 72Mb L: 17/34 MS: 1 InsertByte- 00:08:59.950 #33 DONE cov: 12219 ft: 15023 corp: 31/444b lim: 35 exec/s: 16 rss: 72Mb 00:08:59.950 Done 33 runs in 2 second(s) 00:08:59.950 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:08:59.950 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:59.950 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 
00:08:59.950 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:08:59.950 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:08:59.950 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:59.950 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:59.950 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:59.950 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:08:59.950 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:59.950 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:00.209 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:09:00.209 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:09:00.209 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:09:00.209 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:09:00.209 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:00.209 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:00.209 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:00.209 15:57:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:09:00.209 [2024-07-25 15:57:17.977282] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:09:00.209 [2024-07-25 15:57:17.977342] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172241 ] 00:09:00.209 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.468 [2024-07-25 15:57:18.229475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.468 [2024-07-25 15:57:18.306722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.468 [2024-07-25 15:57:18.365176] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.468 [2024-07-25 15:57:18.381393] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:09:00.468 INFO: Running with entropic power schedule (0xFF, 100). 
00:09:00.468 INFO: Seed: 4173804468 00:09:00.468 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:09:00.468 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:09:00.468 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:09:00.468 INFO: A corpus is not provided, starting from an empty corpus 00:09:00.468 #2 INITED exec/s: 0 rss: 63Mb 00:09:00.468 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:00.468 This may also happen if the target rejected all inputs we tried so far 00:09:00.468 [2024-07-25 15:57:18.436559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069464915967 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.468 [2024-07-25 15:57:18.436587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.727 NEW_FUNC[1/700]: 0x49a940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:09:00.727 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:00.727 #26 NEW cov: 12075 ft: 12075 corp: 2/26b lim: 105 exec/s: 0 rss: 70Mb L: 25/25 MS: 4 CrossOver-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes- 00:09:00.727 [2024-07-25 15:57:18.587149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069464915967 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.727 [2024-07-25 15:57:18.587200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.727 NEW_FUNC[1/1]: 0xf50d00 in spdk_get_ticks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:297 00:09:00.727 #47 NEW cov: 12195 ft: 12758 corp: 3/51b lim: 105 exec/s: 0 rss: 70Mb L: 25/25 MS: 1 CopyPart- 00:09:00.727 [2024-07-25 15:57:18.647053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069462032383 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.727 [2024-07-25 15:57:18.647080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.727 #48 NEW cov: 12201 ft: 13014 corp: 4/77b lim: 105 exec/s: 0 rss: 70Mb L: 26/26 MS: 1 InsertByte- 00:09:00.727 [2024-07-25 15:57:18.687435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.727 [2024-07-25 15:57:18.687461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.727 [2024-07-25 15:57:18.687524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.727 [2024-07-25 15:57:18.687537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.727 [2024-07-25 15:57:18.687592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.727 
[2024-07-25 15:57:18.687605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:00.727 #52 NEW cov: 12286 ft: 13727 corp: 5/141b lim: 105 exec/s: 0 rss: 70Mb L: 64/64 MS: 4 ChangeByte-CopyPart-ChangeBit-InsertRepeatedBytes- 00:09:00.984 [2024-07-25 15:57:18.727395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069464915967 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.984 [2024-07-25 15:57:18.727419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.985 [2024-07-25 15:57:18.727479] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12225489211083434409 len:43434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.985 [2024-07-25 15:57:18.727492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.985 #53 NEW cov: 12286 ft: 14056 corp: 6/187b lim: 105 exec/s: 0 rss: 70Mb L: 46/64 MS: 1 InsertRepeatedBytes- 00:09:00.985 [2024-07-25 15:57:18.767405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446742974245371903 len:256 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.985 [2024-07-25 15:57:18.767428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.985 #54 NEW cov: 12286 ft: 14202 corp: 7/213b lim: 105 exec/s: 0 rss: 70Mb L: 26/64 MS: 1 CMP- DE: "\000\000\000\000\377\377\377\377"- 00:09:00.985 [2024-07-25 15:57:18.817853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.985 [2024-07-25 15:57:18.817878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.985 [2024-07-25 15:57:18.817925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.985 [2024-07-25 15:57:18.817937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.985 [2024-07-25 15:57:18.817989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.985 [2024-07-25 15:57:18.818003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:00.985 #55 NEW cov: 12286 ft: 14254 corp: 8/277b lim: 105 exec/s: 0 rss: 71Mb L: 64/64 MS: 1 ChangeByte- 00:09:00.985 [2024-07-25 15:57:18.867949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.985 [2024-07-25 15:57:18.867974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.985 [2024-07-25 15:57:18.868042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.985 [2024-07-25 15:57:18.868055] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.985 [2024-07-25 15:57:18.868110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.985 [2024-07-25 15:57:18.868123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:00.985 #56 NEW cov: 12286 ft: 14295 corp: 9/341b lim: 105 exec/s: 0 rss: 71Mb L: 64/64 MS: 1 CrossOver- 00:09:00.985 [2024-07-25 15:57:18.908055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.985 [2024-07-25 15:57:18.908079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.985 [2024-07-25 15:57:18.908144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.985 [2024-07-25 15:57:18.908158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.985 [2024-07-25 15:57:18.908211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.985 [2024-07-25 15:57:18.908224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:00.985 #57 NEW cov: 12286 ft: 14362 corp: 10/405b lim: 105 exec/s: 0 rss: 71Mb L: 64/64 MS: 1 ShuffleBytes- 00:09:00.985 [2024-07-25 15:57:18.947931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069462032383 len:2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.985 [2024-07-25 15:57:18.947956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.985 #58 NEW cov: 12286 ft: 14426 corp: 11/431b lim: 105 exec/s: 0 rss: 71Mb L: 26/64 MS: 1 CMP- DE: "\000\000\001\000"- 00:09:01.243 [2024-07-25 15:57:18.988245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.243 [2024-07-25 15:57:18.988268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.243 [2024-07-25 15:57:18.988319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.243 [2024-07-25 15:57:18.988332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.243 [2024-07-25 15:57:18.988383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.243 [2024-07-25 15:57:18.988397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.243 #59 NEW cov: 12286 ft: 14475 corp: 12/495b lim: 105 
exec/s: 0 rss: 71Mb L: 64/64 MS: 1 ChangeBinInt- 00:09:01.243 [2024-07-25 15:57:19.038434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.243 [2024-07-25 15:57:19.038463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.243 [2024-07-25 15:57:19.038499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.243 [2024-07-25 15:57:19.038513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.243 [2024-07-25 15:57:19.038563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.243 [2024-07-25 15:57:19.038576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.243 #60 NEW cov: 12286 ft: 14502 corp: 13/560b lim: 105 exec/s: 0 rss: 71Mb L: 65/65 MS: 1 InsertByte- 00:09:01.243 [2024-07-25 15:57:19.078294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069464915967 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.243 [2024-07-25 15:57:19.078318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.243 #61 NEW cov: 12286 ft: 14533 corp: 14/585b lim: 105 exec/s: 0 rss: 71Mb L: 25/65 MS: 1 PersAutoDict- DE: "\000\000\001\000"- 00:09:01.243 [2024-07-25 15:57:19.128832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.243 [2024-07-25 15:57:19.128857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.243 [2024-07-25 15:57:19.128909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.243 [2024-07-25 15:57:19.128922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.243 [2024-07-25 15:57:19.128990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.243 [2024-07-25 15:57:19.129004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.243 [2024-07-25 15:57:19.129057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.243 [2024-07-25 15:57:19.129070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:01.243 #62 NEW cov: 12286 ft: 15047 corp: 15/675b lim: 105 exec/s: 0 rss: 71Mb L: 90/90 MS: 1 CrossOver- 00:09:01.243 [2024-07-25 15:57:19.178868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 
lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.243 [2024-07-25 15:57:19.178893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.244 [2024-07-25 15:57:19.178945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.244 [2024-07-25 15:57:19.178958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.244 [2024-07-25 15:57:19.179011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.244 [2024-07-25 15:57:19.179025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.244 #63 NEW cov: 12286 ft: 15056 corp: 16/739b lim: 105 exec/s: 0 rss: 71Mb L: 64/90 MS: 1 ChangeBit- 00:09:01.244 [2024-07-25 15:57:19.228978] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.244 [2024-07-25 15:57:19.229002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.244 [2024-07-25 15:57:19.229053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.244 [2024-07-25 15:57:19.229065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.244 [2024-07-25 15:57:19.229117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.244 [2024-07-25 15:57:19.229130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.541 #64 NEW cov: 12286 ft: 15072 corp: 17/804b lim: 105 exec/s: 0 rss: 72Mb L: 65/90 MS: 1 InsertByte- 00:09:01.541 [2024-07-25 15:57:19.278863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069464915777 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.278887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.541 #65 NEW cov: 12286 ft: 15085 corp: 18/829b lim: 105 exec/s: 0 rss: 72Mb L: 25/90 MS: 1 ChangeByte- 00:09:01.541 [2024-07-25 15:57:19.319085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069464915777 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.319109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.541 [2024-07-25 15:57:19.319173] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.319186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.541 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:01.541 #67 NEW cov: 12309 ft: 15144 corp: 19/880b lim: 105 exec/s: 0 rss: 72Mb L: 51/90 MS: 2 EraseBytes-CrossOver- 00:09:01.541 [2024-07-25 15:57:19.369366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.369391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.541 [2024-07-25 15:57:19.369440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.369454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.541 [2024-07-25 15:57:19.369507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551395 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.369521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.541 #68 NEW cov: 12309 ft: 15159 corp: 20/944b lim: 105 exec/s: 0 rss: 72Mb L: 64/90 MS: 1 CopyPart- 00:09:01.541 [2024-07-25 15:57:19.409625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.409648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.541 [2024-07-25 15:57:19.409700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.409716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.541 [2024-07-25 15:57:19.409787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.409801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.541 [2024-07-25 15:57:19.409853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.409865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:01.541 #69 NEW cov: 12309 ft: 15171 corp: 21/1034b lim: 105 exec/s: 69 rss: 72Mb L: 90/90 MS: 1 CMP- DE: "\000\000\000\373"- 00:09:01.541 [2024-07-25 15:57:19.459748] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.459777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.541 [2024-07-25 
15:57:19.459840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.459851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.541 [2024-07-25 15:57:19.459903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744069464915967 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.459916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.541 [2024-07-25 15:57:19.459969] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.459982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:01.541 #70 NEW cov: 12309 ft: 15191 corp: 22/1125b lim: 105 exec/s: 70 rss: 72Mb L: 91/91 MS: 1 InsertRepeatedBytes- 00:09:01.541 [2024-07-25 15:57:19.509882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.509905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.541 [2024-07-25 15:57:19.509970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.509980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.541 [2024-07-25 15:57:19.510033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744069464915967 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.510046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.541 [2024-07-25 15:57:19.510096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.541 [2024-07-25 15:57:19.510109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:01.801 #71 NEW cov: 12309 ft: 15203 corp: 23/1216b lim: 105 exec/s: 71 rss: 72Mb L: 91/91 MS: 1 ChangeBit- 00:09:01.801 [2024-07-25 15:57:19.559925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.801 [2024-07-25 15:57:19.559948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.801 [2024-07-25 15:57:19.560000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.801 [2024-07-25 15:57:19.560013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.801 
[2024-07-25 15:57:19.560063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.801 [2024-07-25 15:57:19.560075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.801 #72 NEW cov: 12309 ft: 15226 corp: 24/1280b lim: 105 exec/s: 72 rss: 72Mb L: 64/91 MS: 1 PersAutoDict- DE: "\000\000\000\373"- 00:09:01.801 [2024-07-25 15:57:19.600014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.801 [2024-07-25 15:57:19.600037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.801 [2024-07-25 15:57:19.600105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18393263828134526975 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.801 [2024-07-25 15:57:19.600119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.801 [2024-07-25 15:57:19.600172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.801 [2024-07-25 15:57:19.600185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.801 #73 NEW cov: 12309 ft: 15251 corp: 25/1344b lim: 105 exec/s: 73 rss: 72Mb L: 64/91 MS: 1 ChangeByte- 00:09:01.801 [2024-07-25 15:57:19.640316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.801 [2024-07-25 15:57:19.640341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.801 [2024-07-25 15:57:19.640411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446742978492891135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.801 [2024-07-25 15:57:19.640424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.801 [2024-07-25 15:57:19.640477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.801 [2024-07-25 15:57:19.640489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.801 [2024-07-25 15:57:19.640542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.801 [2024-07-25 15:57:19.640556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:01.801 #79 NEW cov: 12309 ft: 15327 corp: 26/1441b lim: 105 exec/s: 79 rss: 72Mb L: 97/97 MS: 1 InsertRepeatedBytes- 00:09:01.801 [2024-07-25 15:57:19.680571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.801 
[2024-07-25 15:57:19.680595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.802 [2024-07-25 15:57:19.680650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.802 [2024-07-25 15:57:19.680663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.802 [2024-07-25 15:57:19.680730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744069464915957 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.802 [2024-07-25 15:57:19.680743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.802 [2024-07-25 15:57:19.680797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.802 [2024-07-25 15:57:19.680810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:01.802 #80 NEW cov: 12318 ft: 15363 corp: 27/1532b lim: 105 exec/s: 80 rss: 72Mb L: 91/97 MS: 1 ChangeBinInt- 00:09:01.802 [2024-07-25 15:57:19.720402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.802 [2024-07-25 15:57:19.720426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.802 [2024-07-25 15:57:19.720478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414650111 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.802 [2024-07-25 15:57:19.720491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.802 [2024-07-25 15:57:19.720546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446743128816746495 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.802 [2024-07-25 15:57:19.720558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.802 #81 NEW cov: 12318 ft: 15390 corp: 28/1600b lim: 105 exec/s: 81 rss: 72Mb L: 68/97 MS: 1 PersAutoDict- DE: "\000\000\001\000"- 00:09:01.802 [2024-07-25 15:57:19.770532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.802 [2024-07-25 15:57:19.770555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.802 [2024-07-25 15:57:19.770612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18393263828134526975 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.802 [2024-07-25 15:57:19.770624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.802 [2024-07-25 15:57:19.770677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.802 [2024-07-25 15:57:19.770689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.061 #82 NEW cov: 12318 ft: 15439 corp: 29/1664b lim: 105 exec/s: 82 rss: 72Mb L: 64/97 MS: 1 PersAutoDict- DE: "\000\000\001\000"- 00:09:02.061 [2024-07-25 15:57:19.820816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.061 [2024-07-25 15:57:19.820839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.061 [2024-07-25 15:57:19.820903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.061 [2024-07-25 15:57:19.820914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.062 [2024-07-25 15:57:19.820970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073692971007 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.062 [2024-07-25 15:57:19.820983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.062 [2024-07-25 15:57:19.821036] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.062 [2024-07-25 15:57:19.821049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.062 #83 NEW cov: 12318 ft: 15449 corp: 30/1756b lim: 105 exec/s: 83 rss: 72Mb L: 92/97 MS: 1 InsertByte- 00:09:02.062 [2024-07-25 15:57:19.860560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069462032383 len:2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.062 [2024-07-25 15:57:19.860583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.062 #84 NEW cov: 12318 ft: 15470 corp: 31/1782b lim: 105 exec/s: 84 rss: 73Mb L: 26/97 MS: 1 ChangeBinInt- 00:09:02.062 [2024-07-25 15:57:19.910697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069462032383 len:2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.062 [2024-07-25 15:57:19.910721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.062 #85 NEW cov: 12318 ft: 15473 corp: 32/1808b lim: 105 exec/s: 85 rss: 73Mb L: 26/97 MS: 1 ChangeBinInt- 00:09:02.062 [2024-07-25 15:57:19.961231] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.062 [2024-07-25 15:57:19.961254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.062 [2024-07-25 15:57:19.961319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446470329673973759 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.062 [2024-07-25 15:57:19.961329] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.062 [2024-07-25 15:57:19.961379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.062 [2024-07-25 15:57:19.961393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.062 [2024-07-25 15:57:19.961448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744069532485631 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.062 [2024-07-25 15:57:19.961460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.062 #86 NEW cov: 12318 ft: 15487 corp: 33/1911b lim: 105 exec/s: 86 rss: 73Mb L: 103/103 MS: 1 InsertRepeatedBytes- 00:09:02.062 [2024-07-25 15:57:20.010984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069462032383 len:2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.062 [2024-07-25 15:57:20.011012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.062 #87 NEW cov: 12318 ft: 15629 corp: 34/1937b lim: 105 exec/s: 87 rss: 73Mb L: 26/103 MS: 1 ChangeBit- 00:09:02.062 [2024-07-25 15:57:20.051137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069464915967 len:8448 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.062 [2024-07-25 15:57:20.051180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.321 #88 NEW cov: 12318 ft: 15646 corp: 35/1962b lim: 105 exec/s: 88 rss: 73Mb L: 25/103 MS: 1 ChangeByte- 00:09:02.321 [2024-07-25 15:57:20.091344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069462032383 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.321 [2024-07-25 15:57:20.091368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.321 [2024-07-25 15:57:20.091435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.321 [2024-07-25 15:57:20.091449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.321 #89 NEW cov: 12318 ft: 15663 corp: 36/2011b lim: 105 exec/s: 89 rss: 73Mb L: 49/103 MS: 1 CrossOver- 00:09:02.321 [2024-07-25 15:57:20.131328] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.321 [2024-07-25 15:57:20.131352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.321 #90 NEW cov: 12318 ft: 15672 corp: 37/2043b lim: 105 exec/s: 90 rss: 73Mb L: 32/103 MS: 1 CrossOver- 00:09:02.321 [2024-07-25 15:57:20.181895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.321 [2024-07-25 
15:57:20.181920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.321 [2024-07-25 15:57:20.181989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.321 [2024-07-25 15:57:20.182004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.321 [2024-07-25 15:57:20.182070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.321 [2024-07-25 15:57:20.182083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.321 [2024-07-25 15:57:20.182136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.321 [2024-07-25 15:57:20.182149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.321 #91 NEW cov: 12318 ft: 15690 corp: 38/2141b lim: 105 exec/s: 91 rss: 73Mb L: 98/103 MS: 1 CMP- DE: "\377\026\331h\261\350g "- 00:09:02.321 [2024-07-25 15:57:20.231623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069462032383 len:2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.321 [2024-07-25 15:57:20.231648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.321 #92 NEW cov: 12318 ft: 15703 corp: 39/2167b lim: 105 exec/s: 92 rss: 74Mb L: 26/103 MS: 1 ShuffleBytes- 00:09:02.321 [2024-07-25 15:57:20.282176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.321 [2024-07-25 15:57:20.282200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.321 [2024-07-25 15:57:20.282253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.321 [2024-07-25 15:57:20.282264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.321 [2024-07-25 15:57:20.282316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744069464915967 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.321 [2024-07-25 15:57:20.282331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.321 [2024-07-25 15:57:20.282383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.321 [2024-07-25 15:57:20.282396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.581 #93 NEW cov: 12318 ft: 15707 corp: 40/2262b lim: 105 exec/s: 93 rss: 74Mb L: 95/103 MS: 1 CopyPart- 00:09:02.581 [2024-07-25 15:57:20.331900] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446642914392276991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.581 [2024-07-25 15:57:20.331925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.581 #94 NEW cov: 12318 ft: 15727 corp: 41/2289b lim: 105 exec/s: 94 rss: 74Mb L: 27/103 MS: 1 InsertByte- 00:09:02.581 [2024-07-25 15:57:20.382447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.581 [2024-07-25 15:57:20.382472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.581 [2024-07-25 15:57:20.382523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18393237336776180479 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.581 [2024-07-25 15:57:20.382533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.581 [2024-07-25 15:57:20.382586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.581 [2024-07-25 15:57:20.382600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.581 [2024-07-25 15:57:20.382654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.581 [2024-07-25 15:57:20.382667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.581 #95 NEW cov: 12318 ft: 15746 corp: 42/2393b lim: 105 exec/s: 47 rss: 74Mb L: 104/104 MS: 1 InsertRepeatedBytes- 00:09:02.581 #95 DONE cov: 12318 ft: 15746 corp: 42/2393b lim: 105 exec/s: 47 rss: 74Mb 00:09:02.581 ###### Recommended dictionary. ###### 00:09:02.581 "\000\000\000\000\377\377\377\377" # Uses: 0 00:09:02.581 "\000\000\001\000" # Uses: 3 00:09:02.581 "\000\000\000\373" # Uses: 1 00:09:02.581 "\377\026\331h\261\350g " # Uses: 0 00:09:02.581 ###### End of recommended dictionary. 
###### 00:09:02.581 Done 95 runs in 2 second(s) 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:02.581 15:57:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:09:02.840 [2024-07-25 15:57:20.577934] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:09:02.840 [2024-07-25 15:57:20.578006] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172671 ] 00:09:02.840 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.840 [2024-07-25 15:57:20.829189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.099 [2024-07-25 15:57:20.906779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.099 [2024-07-25 15:57:20.965329] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.099 [2024-07-25 15:57:20.981561] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:09:03.099 INFO: Running with entropic power schedule (0xFF, 100). 00:09:03.099 INFO: Seed: 2478839307 00:09:03.099 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:09:03.099 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:09:03.099 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:03.099 INFO: A corpus is not provided, starting from an empty corpus 00:09:03.099 #2 INITED exec/s: 0 rss: 64Mb 00:09:03.099 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:03.099 This may also happen if the target rejected all inputs we tried so far 00:09:03.099 [2024-07-25 15:57:21.026207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.099 [2024-07-25 15:57:21.026238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.359 NEW_FUNC[1/701]: 0x49dcc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:09:03.359 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:03.359 #21 NEW cov: 12102 ft: 12099 corp: 2/32b lim: 120 exec/s: 0 rss: 71Mb L: 31/31 MS: 4 ShuffleBytes-CrossOver-EraseBytes-InsertRepeatedBytes- 00:09:03.359 [2024-07-25 15:57:21.206642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.359 [2024-07-25 15:57:21.206681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.359 [2024-07-25 15:57:21.206729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.359 [2024-07-25 15:57:21.206744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.359 NEW_FUNC[1/1]: 0xf50d00 in spdk_get_ticks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:297 00:09:03.359 #22 NEW cov: 12216 ft: 13364 corp: 3/89b lim: 120 exec/s: 0 rss: 71Mb L: 57/57 MS: 1 CopyPart- 00:09:03.359 [2024-07-25 15:57:21.296779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:09:03.359 [2024-07-25 15:57:21.296810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.359 #28 NEW cov: 12222 ft: 13755 corp: 4/129b lim: 120 exec/s: 0 rss: 71Mb L: 40/57 MS: 1 CrossOver- 00:09:03.618 [2024-07-25 15:57:21.356985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.618 [2024-07-25 15:57:21.357013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.618 [2024-07-25 15:57:21.357044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6747415484965936477 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.618 [2024-07-25 15:57:21.357059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.618 #29 NEW cov: 12307 ft: 14023 corp: 5/186b lim: 120 exec/s: 0 rss: 72Mb L: 57/57 MS: 1 ChangeBinInt- 00:09:03.618 [2024-07-25 15:57:21.447305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1374463284409406227 len:4884 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.618 [2024-07-25 15:57:21.447332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.618 [2024-07-25 15:57:21.447362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1374463283923456787 len:4884 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.618 [2024-07-25 15:57:21.447377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.618 [2024-07-25 15:57:21.447404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1374463283923456787 len:4884 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.618 [2024-07-25 15:57:21.447418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.618 [2024-07-25 15:57:21.447444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:1374463283923456787 len:4884 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.618 [2024-07-25 15:57:21.447457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:03.618 #41 NEW cov: 12307 ft: 14600 corp: 6/296b lim: 120 exec/s: 0 rss: 72Mb L: 110/110 MS: 2 InsertByte-InsertRepeatedBytes- 00:09:03.618 [2024-07-25 15:57:21.517276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.618 [2024-07-25 15:57:21.517305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.618 #42 NEW cov: 12307 ft: 14655 corp: 7/337b lim: 120 exec/s: 0 rss: 72Mb L: 41/110 MS: 1 CopyPart- 00:09:03.618 [2024-07-25 15:57:21.607583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.618 [2024-07-25 15:57:21.607613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.877 #43 NEW cov: 12307 ft: 14735 corp: 8/377b lim: 120 exec/s: 0 rss: 72Mb L: 40/110 MS: 1 ChangeByte- 00:09:03.877 [2024-07-25 15:57:21.667718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.877 [2024-07-25 15:57:21.667750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.877 #44 NEW cov: 12307 ft: 14793 corp: 9/408b lim: 120 exec/s: 0 rss: 72Mb L: 31/110 MS: 1 ChangeBinInt- 00:09:03.877 [2024-07-25 15:57:21.727863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.877 [2024-07-25 15:57:21.727892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.877 #45 NEW cov: 12307 ft: 14825 corp: 10/445b lim: 120 exec/s: 0 rss: 72Mb L: 37/110 MS: 1 EraseBytes- 00:09:03.877 [2024-07-25 15:57:21.808059] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.877 [2024-07-25 15:57:21.808086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.136 #46 NEW cov: 12307 ft: 14874 corp: 11/482b lim: 120 exec/s: 0 rss: 72Mb L: 37/110 MS: 1 ShuffleBytes- 00:09:04.136 [2024-07-25 15:57:21.888293] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.136 [2024-07-25 15:57:21.888320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.136 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:04.136 #47 NEW cov: 12330 ft: 14921 corp: 12/523b lim: 120 exec/s: 0 rss: 72Mb L: 41/110 MS: 1 InsertByte- 00:09:04.136 [2024-07-25 15:57:21.938449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.136 [2024-07-25 15:57:21.938477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.136 [2024-07-25 15:57:21.938523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6747135109500853597 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.136 [2024-07-25 15:57:21.938538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:04.136 #48 NEW cov: 12330 ft: 14949 corp: 13/580b lim: 120 exec/s: 0 rss: 72Mb L: 57/110 MS: 1 ShuffleBytes- 00:09:04.136 [2024-07-25 15:57:22.028673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.136 [2024-07-25 15:57:22.028701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.136 #49 NEW cov: 12330 ft: 14986 corp: 14/611b lim: 
120 exec/s: 49 rss: 72Mb L: 31/110 MS: 1 CrossOver- 00:09:04.136 [2024-07-25 15:57:22.088905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.136 [2024-07-25 15:57:22.088932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.136 [2024-07-25 15:57:22.088962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6747135109500853597 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.136 [2024-07-25 15:57:22.088977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:04.395 #55 NEW cov: 12330 ft: 15007 corp: 15/668b lim: 120 exec/s: 55 rss: 72Mb L: 57/110 MS: 1 ChangeByte- 00:09:04.395 [2024-07-25 15:57:22.179074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6720317724546653533 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.395 [2024-07-25 15:57:22.179102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.395 #56 NEW cov: 12330 ft: 15027 corp: 16/708b lim: 120 exec/s: 56 rss: 72Mb L: 40/110 MS: 1 ChangeByte- 00:09:04.395 [2024-07-25 15:57:22.229309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6720317724546653533 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.395 [2024-07-25 15:57:22.229335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.395 [2024-07-25 15:57:22.229380] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4854138628955004253 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.395 [2024-07-25 15:57:22.229394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:04.395 [2024-07-25 15:57:22.229423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.396 [2024-07-25 15:57:22.229437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:04.396 #57 NEW cov: 12330 ft: 15355 corp: 17/788b lim: 120 exec/s: 57 rss: 72Mb L: 80/110 MS: 1 CopyPart- 00:09:04.396 [2024-07-25 15:57:22.319463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073949519197 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.396 [2024-07-25 15:57:22.319490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.396 [2024-07-25 15:57:22.319535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6747135109500853597 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.396 [2024-07-25 15:57:22.319550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:04.654 #58 NEW cov: 12330 ft: 15372 corp: 18/845b lim: 120 exec/s: 58 rss: 72Mb L: 57/110 MS: 1 ChangeBit- 00:09:04.654 [2024-07-25 15:57:22.409635] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.654 [2024-07-25 15:57:22.409662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.654 #59 NEW cov: 12330 ft: 15433 corp: 19/882b lim: 120 exec/s: 59 rss: 72Mb L: 37/110 MS: 1 ShuffleBytes- 00:09:04.654 [2024-07-25 15:57:22.489948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.654 [2024-07-25 15:57:22.489975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.654 [2024-07-25 15:57:22.490005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6747415484965936477 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.654 [2024-07-25 15:57:22.490020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:04.654 #60 NEW cov: 12330 ft: 15456 corp: 20/939b lim: 120 exec/s: 60 rss: 72Mb L: 57/110 MS: 1 ChangeByte- 00:09:04.654 [2024-07-25 15:57:22.550023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.654 [2024-07-25 15:57:22.550049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.654 #61 NEW cov: 12330 ft: 15510 corp: 21/967b lim: 120 exec/s: 61 rss: 72Mb L: 28/110 MS: 1 EraseBytes- 00:09:04.654 [2024-07-25 15:57:22.600176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.654 [2024-07-25 15:57:22.600203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.913 #62 NEW cov: 12330 ft: 15542 corp: 22/1014b lim: 120 exec/s: 62 rss: 72Mb L: 47/110 MS: 1 InsertRepeatedBytes- 00:09:04.913 [2024-07-25 15:57:22.680344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.913 [2024-07-25 15:57:22.680370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.913 #63 NEW cov: 12330 ft: 15549 corp: 23/1055b lim: 120 exec/s: 63 rss: 72Mb L: 41/110 MS: 1 CrossOver- 00:09:04.913 [2024-07-25 15:57:22.730622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.913 [2024-07-25 15:57:22.730650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.913 [2024-07-25 15:57:22.730680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.913 [2024-07-25 15:57:22.730695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:04.913 [2024-07-25 15:57:22.730722] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.913 [2024-07-25 15:57:22.730736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:04.913 #64 NEW cov: 12330 ft: 15607 corp: 24/1146b lim: 120 exec/s: 64 rss: 72Mb L: 91/110 MS: 1 CrossOver- 00:09:04.913 [2024-07-25 15:57:22.820737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6712999375152176477 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.913 [2024-07-25 15:57:22.820772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.913 #65 NEW cov: 12330 ft: 15615 corp: 25/1187b lim: 120 exec/s: 65 rss: 72Mb L: 41/110 MS: 1 InsertByte- 00:09:04.913 [2024-07-25 15:57:22.870941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.913 [2024-07-25 15:57:22.870970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.913 [2024-07-25 15:57:22.871001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23972 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.913 [2024-07-25 15:57:22.871015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.173 #66 NEW cov: 12330 ft: 15672 corp: 26/1250b lim: 120 exec/s: 66 rss: 72Mb L: 63/110 MS: 1 CrossOver- 00:09:05.173 [2024-07-25 15:57:22.961125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6712999375152176477 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.173 [2024-07-25 15:57:22.961154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.173 #67 NEW cov: 12330 ft: 15680 corp: 27/1289b lim: 120 exec/s: 33 rss: 72Mb L: 39/110 MS: 1 EraseBytes- 00:09:05.173 #67 DONE cov: 12330 ft: 15680 corp: 27/1289b lim: 120 exec/s: 33 rss: 72Mb 00:09:05.173 Done 67 runs in 2 second(s) 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local 
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:05.173 15:57:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:09:05.432 [2024-07-25 15:57:23.187589] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:09:05.432 [2024-07-25 15:57:23.187667] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173092 ] 00:09:05.432 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.690 [2024-07-25 15:57:23.444150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.690 [2024-07-25 15:57:23.521530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.690 [2024-07-25 15:57:23.579896] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.690 [2024-07-25 15:57:23.596127] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:09:05.690 INFO: Running with entropic power schedule (0xFF, 100). 00:09:05.690 INFO: Seed: 797880354 00:09:05.690 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:09:05.690 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:09:05.690 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:09:05.690 INFO: A corpus is not provided, starting from an empty corpus 00:09:05.690 #2 INITED exec/s: 0 rss: 63Mb 00:09:05.690 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:05.690 This may also happen if the target rejected all inputs we tried so far 00:09:05.690 [2024-07-25 15:57:23.651775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:05.690 [2024-07-25 15:57:23.651808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.690 [2024-07-25 15:57:23.651863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:05.690 [2024-07-25 15:57:23.651878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.690 [2024-07-25 15:57:23.651936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:05.690 [2024-07-25 15:57:23.651950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.690 [2024-07-25 15:57:23.652012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:05.690 [2024-07-25 15:57:23.652028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.949 NEW_FUNC[1/700]: 0x4a15b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:09:05.949 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:05.949 #7 NEW cov: 12044 ft: 12037 corp: 2/83b lim: 100 exec/s: 0 rss: 70Mb L: 82/82 MS: 5 InsertRepeatedBytes-CMP-EraseBytes-CopyPart-InsertRepeatedBytes- DE: "\006\000\000\000"- 00:09:05.949 [2024-07-25 15:57:23.802089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:05.950 [2024-07-25 15:57:23.802134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.950 [2024-07-25 15:57:23.802195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:05.950 [2024-07-25 15:57:23.802213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.950 [2024-07-25 15:57:23.802278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:05.950 [2024-07-25 15:57:23.802296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.950 [2024-07-25 15:57:23.802357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:05.950 [2024-07-25 15:57:23.802373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.950 #13 NEW cov: 12159 ft: 12588 corp: 3/166b lim: 100 exec/s: 0 rss: 70Mb L: 83/83 MS: 1 InsertByte- 00:09:05.950 [2024-07-25 15:57:23.862094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:05.950 [2024-07-25 15:57:23.862119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.950 
[2024-07-25 15:57:23.862170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:05.950 [2024-07-25 15:57:23.862182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.950 [2024-07-25 15:57:23.862231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:05.950 [2024-07-25 15:57:23.862242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.950 [2024-07-25 15:57:23.862289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:05.950 [2024-07-25 15:57:23.862300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.950 #19 NEW cov: 12165 ft: 12945 corp: 4/248b lim: 100 exec/s: 0 rss: 70Mb L: 82/83 MS: 1 ChangeBinInt- 00:09:05.950 [2024-07-25 15:57:23.902211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:05.950 [2024-07-25 15:57:23.902235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.950 [2024-07-25 15:57:23.902302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:05.950 [2024-07-25 15:57:23.902314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.950 [2024-07-25 15:57:23.902361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:05.950 [2024-07-25 15:57:23.902373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.950 [2024-07-25 15:57:23.902425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:05.950 [2024-07-25 15:57:23.902437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.950 #20 NEW cov: 12250 ft: 13305 corp: 5/334b lim: 100 exec/s: 0 rss: 70Mb L: 86/86 MS: 1 PersAutoDict- DE: "\006\000\000\000"- 00:09:06.209 [2024-07-25 15:57:23.942327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.209 [2024-07-25 15:57:23.942351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.209 [2024-07-25 15:57:23.942401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.209 [2024-07-25 15:57:23.942413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.209 [2024-07-25 15:57:23.942461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.209 [2024-07-25 15:57:23.942474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.209 [2024-07-25 15:57:23.942523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.209 [2024-07-25 
15:57:23.942535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.209 #21 NEW cov: 12250 ft: 13443 corp: 6/417b lim: 100 exec/s: 0 rss: 71Mb L: 83/86 MS: 1 PersAutoDict- DE: "\006\000\000\000"- 00:09:06.209 [2024-07-25 15:57:23.992473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.209 [2024-07-25 15:57:23.992496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.209 [2024-07-25 15:57:23.992565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.209 [2024-07-25 15:57:23.992577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.209 [2024-07-25 15:57:23.992624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.209 [2024-07-25 15:57:23.992637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.209 [2024-07-25 15:57:23.992686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.209 [2024-07-25 15:57:23.992698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.209 #22 NEW cov: 12250 ft: 13536 corp: 7/503b lim: 100 exec/s: 0 rss: 71Mb L: 86/86 MS: 1 ChangeByte- 00:09:06.209 [2024-07-25 15:57:24.042577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.209 [2024-07-25 15:57:24.042600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.209 [2024-07-25 15:57:24.042649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.209 [2024-07-25 15:57:24.042661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.209 [2024-07-25 15:57:24.042708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.209 [2024-07-25 15:57:24.042720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.209 [2024-07-25 15:57:24.042772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.209 [2024-07-25 15:57:24.042784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.209 #23 NEW cov: 12250 ft: 13626 corp: 8/585b lim: 100 exec/s: 0 rss: 71Mb L: 82/86 MS: 1 ChangeBit- 00:09:06.209 [2024-07-25 15:57:24.082747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.209 [2024-07-25 15:57:24.082774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.209 [2024-07-25 15:57:24.082839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.209 [2024-07-25 15:57:24.082849] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.209 [2024-07-25 15:57:24.082898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.209 [2024-07-25 15:57:24.082911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.209 [2024-07-25 15:57:24.082962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.209 [2024-07-25 15:57:24.082973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.210 #24 NEW cov: 12250 ft: 13644 corp: 9/668b lim: 100 exec/s: 0 rss: 71Mb L: 83/86 MS: 1 CMP- DE: "\001\000\000\000\002\214\251)"- 00:09:06.210 [2024-07-25 15:57:24.132861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.210 [2024-07-25 15:57:24.132884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.210 [2024-07-25 15:57:24.132935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.210 [2024-07-25 15:57:24.132947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.210 [2024-07-25 15:57:24.132994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.210 [2024-07-25 15:57:24.133006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.210 [2024-07-25 15:57:24.133055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.210 [2024-07-25 15:57:24.133066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.210 #25 NEW cov: 12250 ft: 13702 corp: 10/755b lim: 100 exec/s: 0 rss: 71Mb L: 87/87 MS: 1 InsertByte- 00:09:06.210 [2024-07-25 15:57:24.172951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.210 [2024-07-25 15:57:24.172974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.210 [2024-07-25 15:57:24.173041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.210 [2024-07-25 15:57:24.173054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.210 [2024-07-25 15:57:24.173104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.210 [2024-07-25 15:57:24.173116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.210 [2024-07-25 15:57:24.173164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.210 [2024-07-25 15:57:24.173175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.469 #26 NEW cov: 12250 ft: 
13752 corp: 11/842b lim: 100 exec/s: 0 rss: 71Mb L: 87/87 MS: 1 ChangeBit- 00:09:06.469 [2024-07-25 15:57:24.223136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.469 [2024-07-25 15:57:24.223166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.469 [2024-07-25 15:57:24.223205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.469 [2024-07-25 15:57:24.223219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.469 [2024-07-25 15:57:24.223270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.469 [2024-07-25 15:57:24.223282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.469 [2024-07-25 15:57:24.223331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.469 [2024-07-25 15:57:24.223343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.469 #27 NEW cov: 12250 ft: 13776 corp: 12/932b lim: 100 exec/s: 0 rss: 71Mb L: 90/90 MS: 1 PersAutoDict- DE: "\001\000\000\000\002\214\251)"- 00:09:06.469 [2024-07-25 15:57:24.273289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.469 [2024-07-25 15:57:24.273314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.469 [2024-07-25 15:57:24.273362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.469 [2024-07-25 15:57:24.273373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.469 [2024-07-25 15:57:24.273422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.469 [2024-07-25 15:57:24.273433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.469 [2024-07-25 15:57:24.273482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.470 [2024-07-25 15:57:24.273494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.470 #28 NEW cov: 12250 ft: 13779 corp: 13/1023b lim: 100 exec/s: 0 rss: 71Mb L: 91/91 MS: 1 InsertByte- 00:09:06.470 [2024-07-25 15:57:24.323288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.470 [2024-07-25 15:57:24.323312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.470 [2024-07-25 15:57:24.323361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.470 [2024-07-25 15:57:24.323373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.470 [2024-07-25 15:57:24.323423] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.470 [2024-07-25 15:57:24.323435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.470 #29 NEW cov: 12250 ft: 14084 corp: 14/1097b lim: 100 exec/s: 0 rss: 71Mb L: 74/91 MS: 1 EraseBytes- 00:09:06.470 [2024-07-25 15:57:24.363494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.470 [2024-07-25 15:57:24.363518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.470 [2024-07-25 15:57:24.363571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.470 [2024-07-25 15:57:24.363584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.470 [2024-07-25 15:57:24.363631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.470 [2024-07-25 15:57:24.363646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.470 [2024-07-25 15:57:24.363692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.470 [2024-07-25 15:57:24.363703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.470 #30 NEW cov: 12250 ft: 14090 corp: 15/1183b lim: 100 exec/s: 0 rss: 72Mb L: 86/91 MS: 1 CopyPart- 00:09:06.470 [2024-07-25 15:57:24.413602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.470 [2024-07-25 15:57:24.413628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.470 [2024-07-25 15:57:24.413676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.470 [2024-07-25 15:57:24.413687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.470 [2024-07-25 15:57:24.413735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.470 [2024-07-25 15:57:24.413746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.470 [2024-07-25 15:57:24.413798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.470 [2024-07-25 15:57:24.413809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.470 #36 NEW cov: 12250 ft: 14147 corp: 16/1265b lim: 100 exec/s: 0 rss: 72Mb L: 82/91 MS: 1 ChangeBinInt- 00:09:06.470 [2024-07-25 15:57:24.453634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.470 [2024-07-25 15:57:24.453658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.470 [2024-07-25 15:57:24.453720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 
nsid:0 00:09:06.470 [2024-07-25 15:57:24.453732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.470 [2024-07-25 15:57:24.453783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.470 [2024-07-25 15:57:24.453796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.729 #37 NEW cov: 12250 ft: 14160 corp: 17/1339b lim: 100 exec/s: 0 rss: 72Mb L: 74/91 MS: 1 ShuffleBytes- 00:09:06.729 [2024-07-25 15:57:24.503891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.729 [2024-07-25 15:57:24.503915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.729 [2024-07-25 15:57:24.503982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.729 [2024-07-25 15:57:24.503994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.729 [2024-07-25 15:57:24.504043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.729 [2024-07-25 15:57:24.504054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.729 [2024-07-25 15:57:24.504102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.729 [2024-07-25 15:57:24.504114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.729 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:06.729 #38 NEW cov: 12273 ft: 14217 corp: 18/1422b lim: 100 exec/s: 0 rss: 72Mb L: 83/91 MS: 1 InsertByte- 00:09:06.729 [2024-07-25 15:57:24.544044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.729 [2024-07-25 15:57:24.544070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.730 [2024-07-25 15:57:24.544121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.730 [2024-07-25 15:57:24.544132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.730 [2024-07-25 15:57:24.544180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.730 [2024-07-25 15:57:24.544191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.730 [2024-07-25 15:57:24.544239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.730 [2024-07-25 15:57:24.544250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.730 #39 NEW cov: 12273 ft: 14263 corp: 19/1508b lim: 100 exec/s: 0 rss: 72Mb L: 86/91 MS: 1 ChangeByte- 00:09:06.730 [2024-07-25 15:57:24.584103] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.730 [2024-07-25 15:57:24.584127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.730 [2024-07-25 15:57:24.584181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.730 [2024-07-25 15:57:24.584192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.730 [2024-07-25 15:57:24.584240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.730 [2024-07-25 15:57:24.584252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.730 [2024-07-25 15:57:24.584302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.730 [2024-07-25 15:57:24.584313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.730 #40 NEW cov: 12273 ft: 14310 corp: 20/1603b lim: 100 exec/s: 0 rss: 72Mb L: 95/95 MS: 1 InsertRepeatedBytes- 00:09:06.730 [2024-07-25 15:57:24.624219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.730 [2024-07-25 15:57:24.624243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.730 [2024-07-25 15:57:24.624290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.730 [2024-07-25 15:57:24.624299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.730 [2024-07-25 15:57:24.624347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.730 [2024-07-25 15:57:24.624374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.730 [2024-07-25 15:57:24.624426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.730 [2024-07-25 15:57:24.624439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.730 #41 NEW cov: 12273 ft: 14341 corp: 21/1686b lim: 100 exec/s: 41 rss: 72Mb L: 83/95 MS: 1 ShuffleBytes- 00:09:06.730 [2024-07-25 15:57:24.674388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.730 [2024-07-25 15:57:24.674412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.730 [2024-07-25 15:57:24.674465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.730 [2024-07-25 15:57:24.674477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.730 [2024-07-25 15:57:24.674525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.730 [2024-07-25 15:57:24.674536] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.730 [2024-07-25 15:57:24.674585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.730 [2024-07-25 15:57:24.674596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.730 #42 NEW cov: 12273 ft: 14354 corp: 22/1772b lim: 100 exec/s: 42 rss: 72Mb L: 86/95 MS: 1 InsertRepeatedBytes- 00:09:06.989 [2024-07-25 15:57:24.724529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.989 [2024-07-25 15:57:24.724554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.989 [2024-07-25 15:57:24.724629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.989 [2024-07-25 15:57:24.724643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.989 [2024-07-25 15:57:24.724691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.989 [2024-07-25 15:57:24.724701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.989 [2024-07-25 15:57:24.724749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.989 [2024-07-25 15:57:24.724766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.989 #43 NEW cov: 12273 ft: 14377 corp: 23/1859b lim: 100 exec/s: 43 rss: 72Mb L: 87/95 MS: 1 InsertByte- 00:09:06.989 [2024-07-25 15:57:24.774628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.989 [2024-07-25 15:57:24.774652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.989 [2024-07-25 15:57:24.774703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.989 [2024-07-25 15:57:24.774715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.989 [2024-07-25 15:57:24.774768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.989 [2024-07-25 15:57:24.774780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.989 [2024-07-25 15:57:24.774829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.990 [2024-07-25 15:57:24.774840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.990 #44 NEW cov: 12273 ft: 14418 corp: 24/1941b lim: 100 exec/s: 44 rss: 72Mb L: 82/95 MS: 1 ChangeBit- 00:09:06.990 [2024-07-25 15:57:24.814752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.990 [2024-07-25 15:57:24.814780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:09:06.990 [2024-07-25 15:57:24.814830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.990 [2024-07-25 15:57:24.814841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.990 [2024-07-25 15:57:24.814888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.990 [2024-07-25 15:57:24.814902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.990 [2024-07-25 15:57:24.814949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.990 [2024-07-25 15:57:24.814959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.990 #45 NEW cov: 12273 ft: 14439 corp: 25/2040b lim: 100 exec/s: 45 rss: 72Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:09:06.990 [2024-07-25 15:57:24.864795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.990 [2024-07-25 15:57:24.864818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.990 [2024-07-25 15:57:24.864870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.990 [2024-07-25 15:57:24.864882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.990 [2024-07-25 15:57:24.864932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.990 [2024-07-25 15:57:24.864944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.990 #46 NEW cov: 12273 ft: 14441 corp: 26/2105b lim: 100 exec/s: 46 rss: 72Mb L: 65/99 MS: 1 EraseBytes- 00:09:06.990 [2024-07-25 15:57:24.905006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.990 [2024-07-25 15:57:24.905029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.990 [2024-07-25 15:57:24.905076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.990 [2024-07-25 15:57:24.905085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.990 [2024-07-25 15:57:24.905133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.990 [2024-07-25 15:57:24.905160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:06.990 [2024-07-25 15:57:24.905212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:06.990 [2024-07-25 15:57:24.905224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:06.990 #47 NEW cov: 12273 ft: 14452 corp: 27/2203b lim: 100 exec/s: 47 rss: 72Mb L: 98/99 MS: 1 InsertRepeatedBytes- 00:09:06.990 [2024-07-25 
15:57:24.955046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:06.990 [2024-07-25 15:57:24.955069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:06.990 [2024-07-25 15:57:24.955115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:06.990 [2024-07-25 15:57:24.955126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:06.990 [2024-07-25 15:57:24.955176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:06.990 [2024-07-25 15:57:24.955188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.250 #48 NEW cov: 12273 ft: 14459 corp: 28/2274b lim: 100 exec/s: 48 rss: 72Mb L: 71/99 MS: 1 InsertRepeatedBytes- 00:09:07.250 [2024-07-25 15:57:25.005282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:07.250 [2024-07-25 15:57:25.005306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.250 [2024-07-25 15:57:25.005354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:07.250 [2024-07-25 15:57:25.005365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.250 [2024-07-25 15:57:25.005413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:07.250 [2024-07-25 15:57:25.005425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.250 [2024-07-25 15:57:25.005473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:07.250 [2024-07-25 15:57:25.005484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:07.250 #49 NEW cov: 12273 ft: 14464 corp: 29/2360b lim: 100 exec/s: 49 rss: 72Mb L: 86/99 MS: 1 ChangeBinInt- 00:09:07.250 [2024-07-25 15:57:25.045416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:07.250 [2024-07-25 15:57:25.045440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.250 [2024-07-25 15:57:25.045490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:07.250 [2024-07-25 15:57:25.045502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.250 [2024-07-25 15:57:25.045548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:07.250 [2024-07-25 15:57:25.045560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.250 [2024-07-25 15:57:25.045608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:07.250 [2024-07-25 15:57:25.045619] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:07.250 #50 NEW cov: 12273 ft: 14491 corp: 30/2447b lim: 100 exec/s: 50 rss: 72Mb L: 87/99 MS: 1 ChangeBit- 00:09:07.250 [2024-07-25 15:57:25.095425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:07.250 [2024-07-25 15:57:25.095449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.250 [2024-07-25 15:57:25.095500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:07.250 [2024-07-25 15:57:25.095511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.250 [2024-07-25 15:57:25.095559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:07.250 [2024-07-25 15:57:25.095571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.250 #51 NEW cov: 12273 ft: 14505 corp: 31/2513b lim: 100 exec/s: 51 rss: 72Mb L: 66/99 MS: 1 InsertByte- 00:09:07.250 [2024-07-25 15:57:25.135648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:07.250 [2024-07-25 15:57:25.135671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.250 [2024-07-25 15:57:25.135719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:07.250 [2024-07-25 15:57:25.135728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.250 [2024-07-25 15:57:25.135782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:07.250 [2024-07-25 15:57:25.135793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.250 [2024-07-25 15:57:25.135845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:07.250 [2024-07-25 15:57:25.135857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:07.250 #52 NEW cov: 12273 ft: 14521 corp: 32/2601b lim: 100 exec/s: 52 rss: 72Mb L: 88/99 MS: 1 InsertByte- 00:09:07.250 [2024-07-25 15:57:25.175669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:07.250 [2024-07-25 15:57:25.175692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.250 [2024-07-25 15:57:25.175745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:07.250 [2024-07-25 15:57:25.175756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.250 [2024-07-25 15:57:25.175812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:07.250 [2024-07-25 15:57:25.175840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.250 #53 NEW cov: 12273 ft: 14532 corp: 33/2673b lim: 100 exec/s: 53 rss: 72Mb L: 72/99 MS: 1 InsertByte- 00:09:07.250 [2024-07-25 15:57:25.225782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:07.250 [2024-07-25 15:57:25.225806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.250 [2024-07-25 15:57:25.225857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:07.250 [2024-07-25 15:57:25.225868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.250 [2024-07-25 15:57:25.225916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:07.250 [2024-07-25 15:57:25.225928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.510 #54 NEW cov: 12273 ft: 14541 corp: 34/2738b lim: 100 exec/s: 54 rss: 72Mb L: 65/99 MS: 1 ChangeByte- 00:09:07.510 [2024-07-25 15:57:25.266050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:07.510 [2024-07-25 15:57:25.266073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.510 [2024-07-25 15:57:25.266120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:07.510 [2024-07-25 15:57:25.266129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.510 [2024-07-25 15:57:25.266177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:07.510 [2024-07-25 15:57:25.266189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.510 [2024-07-25 15:57:25.266237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:07.510 [2024-07-25 15:57:25.266249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:07.510 #55 NEW cov: 12273 ft: 14544 corp: 35/2820b lim: 100 exec/s: 55 rss: 72Mb L: 82/99 MS: 1 ChangeByte- 00:09:07.510 [2024-07-25 15:57:25.306134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:07.510 [2024-07-25 15:57:25.306157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.510 [2024-07-25 15:57:25.306206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:07.510 [2024-07-25 15:57:25.306217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.510 [2024-07-25 15:57:25.306269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:07.510 [2024-07-25 15:57:25.306282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.510 
[2024-07-25 15:57:25.306330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:07.510 [2024-07-25 15:57:25.306342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:07.510 #56 NEW cov: 12273 ft: 14556 corp: 36/2919b lim: 100 exec/s: 56 rss: 72Mb L: 99/99 MS: 1 ChangeBinInt- 00:09:07.510 [2024-07-25 15:57:25.356300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:07.510 [2024-07-25 15:57:25.356324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.510 [2024-07-25 15:57:25.356376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:07.510 [2024-07-25 15:57:25.356387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.510 [2024-07-25 15:57:25.356435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:07.510 [2024-07-25 15:57:25.356447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.510 [2024-07-25 15:57:25.356495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:07.510 [2024-07-25 15:57:25.356506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:07.510 #57 NEW cov: 12273 ft: 14571 corp: 37/3002b lim: 100 exec/s: 57 rss: 72Mb L: 83/99 MS: 1 ShuffleBytes- 00:09:07.510 [2024-07-25 15:57:25.396419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:07.510 [2024-07-25 15:57:25.396442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.510 [2024-07-25 15:57:25.396492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:07.510 [2024-07-25 15:57:25.396504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.510 [2024-07-25 15:57:25.396553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:07.510 [2024-07-25 15:57:25.396565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.510 [2024-07-25 15:57:25.396614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:07.510 [2024-07-25 15:57:25.396625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:07.510 #58 NEW cov: 12273 ft: 14581 corp: 38/3095b lim: 100 exec/s: 58 rss: 72Mb L: 93/99 MS: 1 InsertRepeatedBytes- 00:09:07.510 [2024-07-25 15:57:25.436553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:07.510 [2024-07-25 15:57:25.436576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.510 [2024-07-25 15:57:25.436630] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:07.510 [2024-07-25 15:57:25.436641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.511 [2024-07-25 15:57:25.436689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:07.511 [2024-07-25 15:57:25.436716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.511 [2024-07-25 15:57:25.436772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:07.511 [2024-07-25 15:57:25.436784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:07.511 #59 NEW cov: 12273 ft: 14582 corp: 39/3182b lim: 100 exec/s: 59 rss: 72Mb L: 87/99 MS: 1 ChangeBinInt- 00:09:07.511 [2024-07-25 15:57:25.486419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:07.511 [2024-07-25 15:57:25.486443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.511 [2024-07-25 15:57:25.486486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:07.511 [2024-07-25 15:57:25.486496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.770 #60 NEW cov: 12273 ft: 14892 corp: 40/3236b lim: 100 exec/s: 60 rss: 72Mb L: 54/99 MS: 1 EraseBytes- 00:09:07.770 [2024-07-25 15:57:25.526464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:07.770 [2024-07-25 15:57:25.526488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.770 #61 NEW cov: 12273 ft: 15222 corp: 41/3272b lim: 100 exec/s: 61 rss: 72Mb L: 36/99 MS: 1 EraseBytes- 00:09:07.770 [2024-07-25 15:57:25.566651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:07.770 [2024-07-25 15:57:25.566677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.770 [2024-07-25 15:57:25.566729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:07.770 [2024-07-25 15:57:25.566743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.770 #62 NEW cov: 12273 ft: 15228 corp: 42/3326b lim: 100 exec/s: 62 rss: 72Mb L: 54/99 MS: 1 ChangeByte- 00:09:07.770 [2024-07-25 15:57:25.617035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:07.770 [2024-07-25 15:57:25.617059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.770 [2024-07-25 15:57:25.617107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:07.770 [2024-07-25 15:57:25.617116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.770 
[2024-07-25 15:57:25.617162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:07.770 [2024-07-25 15:57:25.617173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.770 [2024-07-25 15:57:25.617221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:07.770 [2024-07-25 15:57:25.617232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:07.770 #63 NEW cov: 12273 ft: 15237 corp: 43/3408b lim: 100 exec/s: 31 rss: 73Mb L: 82/99 MS: 1 ChangeBit- 00:09:07.770 #63 DONE cov: 12273 ft: 15237 corp: 43/3408b lim: 100 exec/s: 31 rss: 73Mb 00:09:07.770 ###### Recommended dictionary. ###### 00:09:07.770 "\006\000\000\000" # Uses: 3 00:09:07.770 "\001\000\000\000\002\214\251)" # Uses: 1 00:09:07.770 ###### End of recommended dictionary. ###### 00:09:07.770 Done 63 runs in 2 second(s) 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:08.030 15:57:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:09:08.030 [2024-07-25 15:57:25.815995] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:09:08.030 [2024-07-25 15:57:25.816075] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173488 ] 00:09:08.030 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.030 [2024-07-25 15:57:25.990339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.289 [2024-07-25 15:57:26.055479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.289 [2024-07-25 15:57:26.114212] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.289 [2024-07-25 15:57:26.130444] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:09:08.289 INFO: Running with entropic power schedule (0xFF, 100). 00:09:08.289 INFO: Seed: 3332919281 00:09:08.289 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:09:08.289 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:09:08.289 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:08.289 INFO: A corpus is not provided, starting from an empty corpus 00:09:08.289 #2 INITED exec/s: 0 rss: 64Mb 00:09:08.289 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:08.289 This may also happen if the target rejected all inputs we tried so far 00:09:08.289 [2024-07-25 15:57:26.186129] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:08.289 [2024-07-25 15:57:26.186163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.289 [2024-07-25 15:57:26.186210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:08.290 [2024-07-25 15:57:26.186227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:08.290 [2024-07-25 15:57:26.186284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:08.290 [2024-07-25 15:57:26.186306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:08.290 [2024-07-25 15:57:26.186362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:09:08.290 [2024-07-25 15:57:26.186379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:08.290 [2024-07-25 15:57:26.186434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:0 len:1 00:09:08.290 [2024-07-25 15:57:26.186450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:08.550 NEW_FUNC[1/700]: 0x4a4570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 
00:09:08.550 NEW_FUNC[2/700]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:08.550 #4 NEW cov: 12024 ft: 12023 corp: 2/51b lim: 50 exec/s: 0 rss: 71Mb L: 50/50 MS: 2 ChangeByte-InsertRepeatedBytes- 00:09:08.550 [2024-07-25 15:57:26.347349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6799976247815593566 len:24075 00:09:08.550 [2024-07-25 15:57:26.347390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.550 #7 NEW cov: 12137 ft: 12954 corp: 3/61b lim: 50 exec/s: 0 rss: 71Mb L: 10/50 MS: 3 InsertByte-InsertByte-InsertRepeatedBytes- 00:09:08.550 [2024-07-25 15:57:26.398380] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4774451407313060418 len:16963 00:09:08.550 [2024-07-25 15:57:26.398409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.550 [2024-07-25 15:57:26.398510] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4774451407313060418 len:16963 00:09:08.550 [2024-07-25 15:57:26.398538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:08.550 [2024-07-25 15:57:26.398628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4774451407313060418 len:16963 00:09:08.550 [2024-07-25 15:57:26.398640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:08.550 [2024-07-25 15:57:26.398725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:4774451407313060418 len:16963 00:09:08.550 [2024-07-25 15:57:26.398740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:08.550 #8 NEW cov: 12143 ft: 13232 corp: 4/104b lim: 50 exec/s: 0 rss: 71Mb L: 43/50 MS: 1 InsertRepeatedBytes- 00:09:08.550 [2024-07-25 15:57:26.448625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:08.550 [2024-07-25 15:57:26.448653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.550 [2024-07-25 15:57:26.448739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:08.550 [2024-07-25 15:57:26.448756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:08.550 [2024-07-25 15:57:26.448850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:08.550 [2024-07-25 15:57:26.448868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:08.550 [2024-07-25 15:57:26.448954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:09:08.550 [2024-07-25 15:57:26.448974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:08.550 #9 NEW cov: 12228 ft: 13563 corp: 5/144b lim: 50 exec/s: 0 rss: 71Mb L: 40/50 MS: 1 EraseBytes- 00:09:08.550 [2024-07-25 15:57:26.508755] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:08.550 [2024-07-25 15:57:26.508785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.550 [2024-07-25 15:57:26.508885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:08.550 [2024-07-25 15:57:26.508902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:08.550 [2024-07-25 15:57:26.508995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:08.550 [2024-07-25 15:57:26.509009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:08.550 [2024-07-25 15:57:26.509086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:09:08.550 [2024-07-25 15:57:26.509102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:08.809 #10 NEW cov: 12228 ft: 13612 corp: 6/184b lim: 50 exec/s: 0 rss: 71Mb L: 40/50 MS: 1 CopyPart- 00:09:08.809 [2024-07-25 15:57:26.568316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6799976247815593562 len:24075 00:09:08.809 [2024-07-25 15:57:26.568343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.809 #11 NEW cov: 12228 ft: 13689 corp: 7/194b lim: 50 exec/s: 0 rss: 72Mb L: 10/50 MS: 1 ChangeBit- 00:09:08.809 [2024-07-25 15:57:26.629228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:08.809 [2024-07-25 15:57:26.629256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.809 [2024-07-25 15:57:26.629349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:08.809 [2024-07-25 15:57:26.629367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:08.809 [2024-07-25 15:57:26.629464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:08.809 [2024-07-25 15:57:26.629479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:08.809 [2024-07-25 15:57:26.629571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:134140418588672 len:1 00:09:08.809 [2024-07-25 15:57:26.629588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:08.809 #12 NEW cov: 12228 ft: 13768 corp: 8/235b lim: 50 exec/s: 0 rss: 72Mb L: 41/50 MS: 1 InsertByte- 00:09:08.809 [2024-07-25 15:57:26.698742] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11646767826850324901 len:38923 00:09:08.809 [2024-07-25 15:57:26.698771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.809 #18 NEW cov: 12228 ft: 13847 corp: 9/245b lim: 50 exec/s: 0 rss: 72Mb L: 10/50 MS: 1 ChangeBinInt- 00:09:08.809 [2024-07-25 15:57:26.759773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:08.809 [2024-07-25 15:57:26.759815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.809 [2024-07-25 15:57:26.759885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:08.809 [2024-07-25 15:57:26.759903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:08.809 [2024-07-25 15:57:26.759990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:08.809 [2024-07-25 15:57:26.760005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:08.809 [2024-07-25 15:57:26.760092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:09:08.809 [2024-07-25 15:57:26.760107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:08.809 #24 NEW cov: 12228 ft: 13881 corp: 10/286b lim: 50 exec/s: 0 rss: 72Mb L: 41/50 MS: 1 CopyPart- 00:09:09.069 [2024-07-25 15:57:26.830045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:09:09.069 [2024-07-25 15:57:26.830072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.069 [2024-07-25 15:57:26.830185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:09.069 [2024-07-25 15:57:26.830200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.069 [2024-07-25 15:57:26.830255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:09.069 [2024-07-25 15:57:26.830273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.069 [2024-07-25 15:57:26.830325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:09.069 [2024-07-25 15:57:26.830339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:09.069 #25 NEW cov: 12228 ft: 14014 corp: 11/330b lim: 50 exec/s: 0 rss: 72Mb L: 44/50 MS: 1 InsertRepeatedBytes- 00:09:09.069 [2024-07-25 15:57:26.880209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 
00:09:09.069 [2024-07-25 15:57:26.880235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.069 [2024-07-25 15:57:26.880342] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:09.069 [2024-07-25 15:57:26.880361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.069 [2024-07-25 15:57:26.880402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:09.069 [2024-07-25 15:57:26.880417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.069 [2024-07-25 15:57:26.880486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:12666373951913984 len:65536 00:09:09.069 [2024-07-25 15:57:26.880503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:09.069 #26 NEW cov: 12228 ft: 14054 corp: 12/374b lim: 50 exec/s: 0 rss: 72Mb L: 44/50 MS: 1 ChangeBinInt- 00:09:09.069 [2024-07-25 15:57:26.940485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:09:09.069 [2024-07-25 15:57:26.940512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.069 [2024-07-25 15:57:26.940604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:09.069 [2024-07-25 15:57:26.940623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.069 [2024-07-25 15:57:26.940712] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:09.069 [2024-07-25 15:57:26.940723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.069 [2024-07-25 15:57:26.940812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446742978492891135 len:1 00:09:09.069 [2024-07-25 15:57:26.940827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:09.069 #27 NEW cov: 12228 ft: 14071 corp: 13/423b lim: 50 exec/s: 0 rss: 72Mb L: 49/50 MS: 1 InsertRepeatedBytes- 00:09:09.069 [2024-07-25 15:57:27.000830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:09.069 [2024-07-25 15:57:27.000857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.069 [2024-07-25 15:57:27.000952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:09.069 [2024-07-25 15:57:27.000970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.069 [2024-07-25 
15:57:27.001052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:2049 00:09:09.069 [2024-07-25 15:57:27.001063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.069 [2024-07-25 15:57:27.001148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:134140418588672 len:1 00:09:09.069 [2024-07-25 15:57:27.001161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:09.069 #28 NEW cov: 12228 ft: 14125 corp: 14/464b lim: 50 exec/s: 0 rss: 72Mb L: 41/50 MS: 1 ChangeBinInt- 00:09:09.069 [2024-07-25 15:57:27.051137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:09.069 [2024-07-25 15:57:27.051164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.069 [2024-07-25 15:57:27.051257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:09.070 [2024-07-25 15:57:27.051274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.070 [2024-07-25 15:57:27.051366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:09.070 [2024-07-25 15:57:27.051379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.070 [2024-07-25 15:57:27.051461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1043677052928 len:1 00:09:09.070 [2024-07-25 15:57:27.051476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:09.329 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:09.329 #29 NEW cov: 12251 ft: 14166 corp: 15/509b lim: 50 exec/s: 0 rss: 72Mb L: 45/50 MS: 1 CMP- DE: "\363\000\000\000"- 00:09:09.329 [2024-07-25 15:57:27.121305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:09.329 [2024-07-25 15:57:27.121337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.329 [2024-07-25 15:57:27.121424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:09.329 [2024-07-25 15:57:27.121441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.329 [2024-07-25 15:57:27.121529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:09.329 [2024-07-25 15:57:27.121546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.329 [2024-07-25 15:57:27.121626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:09:09.329 [2024-07-25 15:57:27.121643] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:09.329 #30 NEW cov: 12251 ft: 14197 corp: 16/550b lim: 50 exec/s: 0 rss: 72Mb L: 41/50 MS: 1 ShuffleBytes- 00:09:09.329 [2024-07-25 15:57:27.171464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:09.329 [2024-07-25 15:57:27.171492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.329 [2024-07-25 15:57:27.171572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:87960930222080 len:1 00:09:09.329 [2024-07-25 15:57:27.171586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.329 [2024-07-25 15:57:27.171678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:09.329 [2024-07-25 15:57:27.171694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.329 [2024-07-25 15:57:27.171783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:09:09.329 [2024-07-25 15:57:27.171798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:09.329 #31 NEW cov: 12251 ft: 14210 corp: 17/591b lim: 50 exec/s: 31 rss: 72Mb L: 41/50 MS: 1 InsertByte- 00:09:09.329 [2024-07-25 15:57:27.220853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11647893726753300956 len:41378 00:09:09.329 [2024-07-25 15:57:27.220880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.329 #32 NEW cov: 12251 ft: 14255 corp: 18/603b lim: 50 exec/s: 32 rss: 72Mb L: 12/50 MS: 1 CopyPart- 00:09:09.329 [2024-07-25 15:57:27.281784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:09.329 [2024-07-25 15:57:27.281811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.329 [2024-07-25 15:57:27.281914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:09.329 [2024-07-25 15:57:27.281933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.329 [2024-07-25 15:57:27.281980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:2049 00:09:09.329 [2024-07-25 15:57:27.281994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.329 [2024-07-25 15:57:27.282060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11169061216297418752 len:1 00:09:09.329 [2024-07-25 15:57:27.282078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:09.329 #33 NEW cov: 12251 ft: 14260 corp: 19/644b lim: 50 
exec/s: 33 rss: 72Mb L: 41/50 MS: 1 ChangeByte- 00:09:09.590 [2024-07-25 15:57:27.342135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4774451407313060418 len:16963 00:09:09.590 [2024-07-25 15:57:27.342165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.342251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4774451407313061442 len:16963 00:09:09.590 [2024-07-25 15:57:27.342271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.342351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4774451407313060418 len:16963 00:09:09.590 [2024-07-25 15:57:27.342367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.342457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:4774451407313060418 len:16963 00:09:09.590 [2024-07-25 15:57:27.342474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:09.590 #34 NEW cov: 12251 ft: 14278 corp: 20/687b lim: 50 exec/s: 34 rss: 72Mb L: 43/50 MS: 1 ChangeBit- 00:09:09.590 [2024-07-25 15:57:27.402359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:09.590 [2024-07-25 15:57:27.402384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.402464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:87960930222080 len:1 00:09:09.590 [2024-07-25 15:57:27.402479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.402562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:244 00:09:09.590 [2024-07-25 15:57:27.402575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.402657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:09:09.590 [2024-07-25 15:57:27.402672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:09.590 #35 NEW cov: 12251 ft: 14308 corp: 21/728b lim: 50 exec/s: 35 rss: 72Mb L: 41/50 MS: 1 PersAutoDict- DE: "\363\000\000\000"- 00:09:09.590 [2024-07-25 15:57:27.462477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:09.590 [2024-07-25 15:57:27.462504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.462608] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17509995351216488448 len:1 00:09:09.590 
[2024-07-25 15:57:27.462624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.462711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:09.590 [2024-07-25 15:57:27.462724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.462820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:09:09.590 [2024-07-25 15:57:27.462840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:09.590 #36 NEW cov: 12251 ft: 14352 corp: 22/769b lim: 50 exec/s: 36 rss: 73Mb L: 41/50 MS: 1 PersAutoDict- DE: "\363\000\000\000"- 00:09:09.590 [2024-07-25 15:57:27.522703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:09.590 [2024-07-25 15:57:27.522730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.522830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:87960930222080 len:1 00:09:09.590 [2024-07-25 15:57:27.522845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.522934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:45079976738816 len:1 00:09:09.590 [2024-07-25 15:57:27.522946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.523032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:09:09.590 [2024-07-25 15:57:27.523050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:09.590 #37 NEW cov: 12251 ft: 14362 corp: 23/811b lim: 50 exec/s: 37 rss: 73Mb L: 42/50 MS: 1 InsertByte- 00:09:09.590 [2024-07-25 15:57:27.573140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:09.590 [2024-07-25 15:57:27.573169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.573258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:09.590 [2024-07-25 15:57:27.573274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.573340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:09.590 [2024-07-25 15:57:27.573355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.573415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 
len:1 00:09:09.590 [2024-07-25 15:57:27.573433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:09.590 [2024-07-25 15:57:27.573512] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:0 len:1 00:09:09.590 [2024-07-25 15:57:27.573526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:09.850 #38 NEW cov: 12251 ft: 14365 corp: 24/861b lim: 50 exec/s: 38 rss: 73Mb L: 50/50 MS: 1 ShuffleBytes- 00:09:09.850 [2024-07-25 15:57:27.622700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:09.850 [2024-07-25 15:57:27.622725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.850 [2024-07-25 15:57:27.622804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:09.850 [2024-07-25 15:57:27.622822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.850 [2024-07-25 15:57:27.622906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:09.850 [2024-07-25 15:57:27.622924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.850 #39 NEW cov: 12251 ft: 14599 corp: 25/892b lim: 50 exec/s: 39 rss: 73Mb L: 31/50 MS: 1 EraseBytes- 00:09:09.850 [2024-07-25 15:57:27.683556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:09.850 [2024-07-25 15:57:27.683582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.850 [2024-07-25 15:57:27.683695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:87960930222080 len:42 00:09:09.850 [2024-07-25 15:57:27.683715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.850 [2024-07-25 15:57:27.683810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:09.850 [2024-07-25 15:57:27.683825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.850 [2024-07-25 15:57:27.683905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:09:09.850 [2024-07-25 15:57:27.683918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:09.850 #40 NEW cov: 12251 ft: 14615 corp: 26/933b lim: 50 exec/s: 40 rss: 73Mb L: 41/50 MS: 1 ChangeBinInt- 00:09:09.850 [2024-07-25 15:57:27.733386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16449536 len:1 00:09:09.850 [2024-07-25 15:57:27.733414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.850 [2024-07-25 
15:57:27.733513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:09.850 [2024-07-25 15:57:27.733527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.850 #43 NEW cov: 12251 ft: 14834 corp: 27/959b lim: 50 exec/s: 43 rss: 73Mb L: 26/50 MS: 3 ShuffleBytes-CrossOver-CrossOver- 00:09:09.850 [2024-07-25 15:57:27.784145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:68398423551770624 len:1 00:09:09.850 [2024-07-25 15:57:27.784174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.850 [2024-07-25 15:57:27.784252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17509995351216488448 len:1 00:09:09.850 [2024-07-25 15:57:27.784269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.850 [2024-07-25 15:57:27.784348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:09.850 [2024-07-25 15:57:27.784363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.850 [2024-07-25 15:57:27.784455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:09:09.850 [2024-07-25 15:57:27.784474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:09.850 #44 NEW cov: 12251 ft: 14848 corp: 28/1000b lim: 50 exec/s: 44 rss: 73Mb L: 41/50 MS: 1 PersAutoDict- DE: "\363\000\000\000"- 00:09:10.109 [2024-07-25 15:57:27.853657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6841812261525544542 len:95 00:09:10.109 [2024-07-25 15:57:27.853687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.109 #45 NEW cov: 12251 ft: 14894 corp: 29/1014b lim: 50 exec/s: 45 rss: 73Mb L: 14/50 MS: 1 PersAutoDict- DE: "\363\000\000\000"- 00:09:10.109 [2024-07-25 15:57:27.904765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:10.109 [2024-07-25 15:57:27.904791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.109 [2024-07-25 15:57:27.904903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:10.109 [2024-07-25 15:57:27.904920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.109 [2024-07-25 15:57:27.905010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:10.109 [2024-07-25 15:57:27.905022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.109 [2024-07-25 15:57:27.905112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 
lba:2738188573441261568 len:1 00:09:10.109 [2024-07-25 15:57:27.905125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:10.109 #46 NEW cov: 12251 ft: 14914 corp: 30/1054b lim: 50 exec/s: 46 rss: 73Mb L: 40/50 MS: 1 ChangeByte- 00:09:10.109 [2024-07-25 15:57:27.954424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11647893726753300956 len:41378 00:09:10.110 [2024-07-25 15:57:27.954448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.110 #47 NEW cov: 12251 ft: 14925 corp: 31/1066b lim: 50 exec/s: 47 rss: 73Mb L: 12/50 MS: 1 ShuffleBytes- 00:09:10.110 [2024-07-25 15:57:28.014550] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:406346555123 len:24075 00:09:10.110 [2024-07-25 15:57:28.014575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.110 #48 NEW cov: 12251 ft: 14941 corp: 32/1076b lim: 50 exec/s: 48 rss: 73Mb L: 10/50 MS: 1 PersAutoDict- DE: "\363\000\000\000"- 00:09:10.110 [2024-07-25 15:57:28.065873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:09:10.110 [2024-07-25 15:57:28.065899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.110 [2024-07-25 15:57:28.066009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:10.110 [2024-07-25 15:57:28.066025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.110 [2024-07-25 15:57:28.066112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:10.110 [2024-07-25 15:57:28.066124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.110 [2024-07-25 15:57:28.066209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:10.110 [2024-07-25 15:57:28.066224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:10.110 #49 NEW cov: 12251 ft: 14956 corp: 33/1120b lim: 50 exec/s: 49 rss: 73Mb L: 44/50 MS: 1 CopyPart- 00:09:10.370 [2024-07-25 15:57:28.115566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:10.370 [2024-07-25 15:57:28.115592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.370 [2024-07-25 15:57:28.115659] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:123 00:09:10.370 [2024-07-25 15:57:28.115677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.370 #50 NEW cov: 12251 ft: 14973 corp: 34/1144b lim: 50 exec/s: 50 rss: 73Mb L: 24/50 MS: 1 
EraseBytes- 00:09:10.370 [2024-07-25 15:57:28.165990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4211081216 len:1 00:09:10.370 [2024-07-25 15:57:28.166015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.370 [2024-07-25 15:57:28.166100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4294967296 len:1 00:09:10.370 [2024-07-25 15:57:28.166116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.370 [2024-07-25 15:57:28.166202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:10.370 [2024-07-25 15:57:28.166218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.370 #51 NEW cov: 12251 ft: 14986 corp: 35/1175b lim: 50 exec/s: 25 rss: 73Mb L: 31/50 MS: 1 ChangeBit- 00:09:10.370 #51 DONE cov: 12251 ft: 14986 corp: 35/1175b lim: 50 exec/s: 25 rss: 73Mb 00:09:10.370 ###### Recommended dictionary. ###### 00:09:10.370 "\363\000\000\000" # Uses: 5 00:09:10.370 ###### End of recommended dictionary. ###### 00:09:10.370 Done 51 runs in 2 second(s) 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:10.370 15:57:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:10.370 15:57:28 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:09:10.630 [2024-07-25 15:57:28.361752] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:09:10.630 [2024-07-25 15:57:28.361817] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173834 ] 00:09:10.630 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.630 [2024-07-25 15:57:28.533688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.630 [2024-07-25 15:57:28.598403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.889 [2024-07-25 15:57:28.657259] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.889 [2024-07-25 15:57:28.673484] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:10.889 INFO: Running with entropic power schedule (0xFF, 100). 00:09:10.889 INFO: Seed: 1580917220 00:09:10.889 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:09:10.889 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:09:10.889 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:10.889 INFO: A corpus is not provided, starting from an empty corpus 00:09:10.889 #2 INITED exec/s: 0 rss: 64Mb 00:09:10.889 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:10.889 This may also happen if the target rejected all inputs we tried so far 00:09:10.890 [2024-07-25 15:57:28.729202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:10.890 [2024-07-25 15:57:28.729233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.890 [2024-07-25 15:57:28.729285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:10.890 [2024-07-25 15:57:28.729300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.890 [2024-07-25 15:57:28.729350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:10.890 [2024-07-25 15:57:28.729365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.890 [2024-07-25 15:57:28.729418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:10.890 [2024-07-25 15:57:28.729432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:10.890 NEW_FUNC[1/702]: 0x4a6130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:09:10.890 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:10.890 #22 NEW cov: 12082 ft: 12080 corp: 2/89b lim: 90 exec/s: 0 rss: 71Mb L: 88/88 MS: 5 CrossOver-ChangeByte-CMP-ChangeBit-InsertRepeatedBytes- DE: "\000\000\000E"- 00:09:11.149 [2024-07-25 15:57:28.880236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.149 [2024-07-25 15:57:28.880289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.149 [2024-07-25 15:57:28.880374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.149 [2024-07-25 15:57:28.880399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.149 [2024-07-25 15:57:28.880479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:11.149 [2024-07-25 15:57:28.880503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.149 [2024-07-25 15:57:28.880584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:11.149 [2024-07-25 15:57:28.880607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:11.149 [2024-07-25 15:57:28.880686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:09:11.149 [2024-07-25 15:57:28.880716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:11.149 #28 NEW cov: 12195 ft: 12637 corp: 3/179b lim: 90 
exec/s: 0 rss: 71Mb L: 90/90 MS: 1 CopyPart- 00:09:11.149 [2024-07-25 15:57:28.939564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.149 [2024-07-25 15:57:28.939591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.149 [2024-07-25 15:57:28.939639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.149 [2024-07-25 15:57:28.939652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.149 [2024-07-25 15:57:28.939709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:11.149 [2024-07-25 15:57:28.939722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.149 #31 NEW cov: 12201 ft: 13313 corp: 4/238b lim: 90 exec/s: 0 rss: 71Mb L: 59/90 MS: 3 InsertByte-InsertByte-InsertRepeatedBytes- 00:09:11.149 [2024-07-25 15:57:28.979697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.149 [2024-07-25 15:57:28.979722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.149 [2024-07-25 15:57:28.979767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.150 [2024-07-25 15:57:28.979780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.150 [2024-07-25 15:57:28.979834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:11.150 [2024-07-25 15:57:28.979846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.150 #32 NEW cov: 12286 ft: 13582 corp: 5/297b lim: 90 exec/s: 0 rss: 71Mb L: 59/90 MS: 1 ShuffleBytes- 00:09:11.150 [2024-07-25 15:57:29.030160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.150 [2024-07-25 15:57:29.030184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.150 [2024-07-25 15:57:29.030253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.150 [2024-07-25 15:57:29.030266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.150 [2024-07-25 15:57:29.030320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:11.150 [2024-07-25 15:57:29.030333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.150 [2024-07-25 15:57:29.030389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:11.150 [2024-07-25 15:57:29.030402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:11.150 [2024-07-25 15:57:29.030457] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:09:11.150 [2024-07-25 15:57:29.030470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:11.150 #33 NEW cov: 12286 ft: 13667 corp: 6/387b lim: 90 exec/s: 0 rss: 71Mb L: 90/90 MS: 1 CopyPart- 00:09:11.150 [2024-07-25 15:57:29.069800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.150 [2024-07-25 15:57:29.069826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.150 [2024-07-25 15:57:29.069865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.150 [2024-07-25 15:57:29.069878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.150 #39 NEW cov: 12286 ft: 14138 corp: 7/432b lim: 90 exec/s: 0 rss: 72Mb L: 45/90 MS: 1 EraseBytes- 00:09:11.150 [2024-07-25 15:57:29.119965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.150 [2024-07-25 15:57:29.119990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.150 [2024-07-25 15:57:29.120032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.150 [2024-07-25 15:57:29.120044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.409 #40 NEW cov: 12286 ft: 14194 corp: 8/473b lim: 90 exec/s: 0 rss: 72Mb L: 41/90 MS: 1 EraseBytes- 00:09:11.409 [2024-07-25 15:57:29.170049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.409 [2024-07-25 15:57:29.170074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.409 [2024-07-25 15:57:29.170133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.409 [2024-07-25 15:57:29.170146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.409 #42 NEW cov: 12286 ft: 14278 corp: 9/512b lim: 90 exec/s: 0 rss: 72Mb L: 39/90 MS: 2 CopyPart-CrossOver- 00:09:11.409 [2024-07-25 15:57:29.210684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.409 [2024-07-25 15:57:29.210708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.409 [2024-07-25 15:57:29.210763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.409 [2024-07-25 15:57:29.210774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.409 [2024-07-25 15:57:29.210846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:11.409 [2024-07-25 15:57:29.210858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.409 [2024-07-25 15:57:29.210913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:11.409 [2024-07-25 15:57:29.210926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:11.409 [2024-07-25 15:57:29.210982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:09:11.409 [2024-07-25 15:57:29.210995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:11.409 #43 NEW cov: 12286 ft: 14371 corp: 10/602b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 ChangeBinInt- 00:09:11.409 [2024-07-25 15:57:29.250610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.409 [2024-07-25 15:57:29.250634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.409 [2024-07-25 15:57:29.250704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.409 [2024-07-25 15:57:29.250718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.409 [2024-07-25 15:57:29.250777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:11.409 [2024-07-25 15:57:29.250793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.409 [2024-07-25 15:57:29.250849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:11.409 [2024-07-25 15:57:29.250862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:11.409 #44 NEW cov: 12286 ft: 14480 corp: 11/691b lim: 90 exec/s: 0 rss: 72Mb L: 89/90 MS: 1 InsertByte- 00:09:11.409 [2024-07-25 15:57:29.290391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.409 [2024-07-25 15:57:29.290416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.409 [2024-07-25 15:57:29.290475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.409 [2024-07-25 15:57:29.290488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.409 #45 NEW cov: 12286 ft: 14523 corp: 12/736b lim: 90 exec/s: 0 rss: 72Mb L: 45/90 MS: 1 ChangeBit- 00:09:11.409 [2024-07-25 15:57:29.330498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.409 [2024-07-25 15:57:29.330523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.409 [2024-07-25 15:57:29.330564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.409 [2024-07-25 15:57:29.330577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.409 #46 NEW cov: 12286 ft: 14555 corp: 13/781b lim: 90 exec/s: 0 rss: 72Mb L: 45/90 MS: 1 ChangeByte- 00:09:11.409 [2024-07-25 15:57:29.371144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.409 [2024-07-25 15:57:29.371168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.409 [2024-07-25 15:57:29.371236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.409 [2024-07-25 15:57:29.371247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.409 [2024-07-25 15:57:29.371300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:11.409 [2024-07-25 15:57:29.371313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.409 [2024-07-25 15:57:29.371369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:11.409 [2024-07-25 15:57:29.371382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:11.409 [2024-07-25 15:57:29.371438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:09:11.409 [2024-07-25 15:57:29.371451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:11.409 #47 NEW cov: 12286 ft: 14618 corp: 14/871b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 CrossOver- 00:09:11.668 [2024-07-25 15:57:29.410959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.668 [2024-07-25 15:57:29.410985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.668 [2024-07-25 15:57:29.411044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.668 [2024-07-25 15:57:29.411057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.669 [2024-07-25 15:57:29.411118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:11.669 [2024-07-25 15:57:29.411132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.669 #48 NEW cov: 12286 ft: 14667 corp: 15/930b lim: 90 exec/s: 0 rss: 72Mb L: 59/90 MS: 1 ShuffleBytes- 00:09:11.669 [2024-07-25 15:57:29.461420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.669 [2024-07-25 15:57:29.461444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.669 [2024-07-25 15:57:29.461504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.669 [2024-07-25 15:57:29.461517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.669 [2024-07-25 15:57:29.461589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:11.669 [2024-07-25 15:57:29.461603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.669 [2024-07-25 15:57:29.461660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:11.669 [2024-07-25 15:57:29.461673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:11.669 [2024-07-25 15:57:29.461728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:09:11.669 [2024-07-25 15:57:29.461742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:11.669 #49 NEW cov: 12286 ft: 14688 corp: 16/1020b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 InsertByte- 00:09:11.669 [2024-07-25 15:57:29.511359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.669 [2024-07-25 15:57:29.511383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.669 [2024-07-25 15:57:29.511454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.669 [2024-07-25 15:57:29.511466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.669 [2024-07-25 15:57:29.511521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:11.669 [2024-07-25 15:57:29.511533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.669 [2024-07-25 15:57:29.511588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:11.669 [2024-07-25 15:57:29.511602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:11.669 #50 NEW cov: 12286 ft: 14695 corp: 17/1109b lim: 90 exec/s: 0 rss: 72Mb L: 89/90 MS: 1 InsertByte- 00:09:11.669 [2024-07-25 15:57:29.551299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.669 [2024-07-25 15:57:29.551323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.669 [2024-07-25 15:57:29.551395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.669 [2024-07-25 15:57:29.551409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.669 [2024-07-25 15:57:29.551466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:11.669 [2024-07-25 15:57:29.551482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.669 #51 NEW cov: 12286 ft: 14740 corp: 18/1168b lim: 90 exec/s: 0 
rss: 72Mb L: 59/90 MS: 1 ChangeByte- 00:09:11.669 [2024-07-25 15:57:29.601296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.669 [2024-07-25 15:57:29.601321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.669 [2024-07-25 15:57:29.601361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.669 [2024-07-25 15:57:29.601374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.669 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:11.669 #52 NEW cov: 12303 ft: 14769 corp: 19/1213b lim: 90 exec/s: 0 rss: 72Mb L: 45/90 MS: 1 CrossOver- 00:09:11.929 [2024-07-25 15:57:29.661634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.929 [2024-07-25 15:57:29.661662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.929 [2024-07-25 15:57:29.661712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.929 [2024-07-25 15:57:29.661726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.929 [2024-07-25 15:57:29.661785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:11.929 [2024-07-25 15:57:29.661799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.929 #53 NEW cov: 12303 ft: 14792 corp: 20/1272b lim: 90 exec/s: 0 rss: 72Mb L: 59/90 MS: 1 ShuffleBytes- 00:09:11.929 [2024-07-25 15:57:29.701889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.929 [2024-07-25 15:57:29.701915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.929 [2024-07-25 15:57:29.701988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.929 [2024-07-25 15:57:29.702002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.929 [2024-07-25 15:57:29.702057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:11.929 [2024-07-25 15:57:29.702070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.929 [2024-07-25 15:57:29.702127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:11.929 [2024-07-25 15:57:29.702141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:11.929 #54 NEW cov: 12303 ft: 14801 corp: 21/1361b lim: 90 exec/s: 54 rss: 72Mb L: 89/90 MS: 1 InsertByte- 00:09:11.929 [2024-07-25 15:57:29.741662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.929 
[2024-07-25 15:57:29.741687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.929 [2024-07-25 15:57:29.741729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.929 [2024-07-25 15:57:29.741742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.929 #55 NEW cov: 12303 ft: 14816 corp: 22/1406b lim: 90 exec/s: 55 rss: 72Mb L: 45/90 MS: 1 ChangeBit- 00:09:11.929 [2024-07-25 15:57:29.781811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.929 [2024-07-25 15:57:29.781840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.929 [2024-07-25 15:57:29.781881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.929 [2024-07-25 15:57:29.781894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.929 #56 NEW cov: 12303 ft: 14856 corp: 23/1451b lim: 90 exec/s: 56 rss: 72Mb L: 45/90 MS: 1 CMP- DE: "\016\000\000\000\000\000\000\000"- 00:09:11.929 [2024-07-25 15:57:29.832474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.929 [2024-07-25 15:57:29.832499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.929 [2024-07-25 15:57:29.832571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:11.930 [2024-07-25 15:57:29.832586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.930 [2024-07-25 15:57:29.832644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:11.930 [2024-07-25 15:57:29.832658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.930 [2024-07-25 15:57:29.832715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:11.930 [2024-07-25 15:57:29.832729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:11.930 [2024-07-25 15:57:29.832790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:09:11.930 [2024-07-25 15:57:29.832804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:11.930 #57 NEW cov: 12303 ft: 14869 corp: 24/1541b lim: 90 exec/s: 57 rss: 73Mb L: 90/90 MS: 1 ChangeByte- 00:09:11.930 [2024-07-25 15:57:29.882231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:11.930 [2024-07-25 15:57:29.882257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.930 [2024-07-25 15:57:29.882301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 
cid:1 nsid:0 00:09:11.930 [2024-07-25 15:57:29.882314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.930 [2024-07-25 15:57:29.882370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:11.930 [2024-07-25 15:57:29.882384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.930 #58 NEW cov: 12303 ft: 14883 corp: 25/1600b lim: 90 exec/s: 58 rss: 73Mb L: 59/90 MS: 1 ChangeBit- 00:09:12.189 [2024-07-25 15:57:29.922179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:12.189 [2024-07-25 15:57:29.922204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.189 [2024-07-25 15:57:29.922253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:12.189 [2024-07-25 15:57:29.922266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.189 #59 NEW cov: 12303 ft: 14955 corp: 26/1645b lim: 90 exec/s: 59 rss: 73Mb L: 45/90 MS: 1 ChangeBinInt- 00:09:12.189 [2024-07-25 15:57:29.962500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:12.189 [2024-07-25 15:57:29.962526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.189 [2024-07-25 15:57:29.962585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:12.189 [2024-07-25 15:57:29.962598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.189 [2024-07-25 15:57:29.962657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:12.189 [2024-07-25 15:57:29.962670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.189 #60 NEW cov: 12303 ft: 14962 corp: 27/1712b lim: 90 exec/s: 60 rss: 73Mb L: 67/90 MS: 1 InsertRepeatedBytes- 00:09:12.189 [2024-07-25 15:57:30.012741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:12.189 [2024-07-25 15:57:30.012775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.189 [2024-07-25 15:57:30.012839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:12.189 [2024-07-25 15:57:30.012855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.189 [2024-07-25 15:57:30.012919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:12.189 [2024-07-25 15:57:30.012934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.189 #61 NEW cov: 12303 ft: 14970 corp: 28/1773b lim: 90 exec/s: 61 rss: 73Mb L: 61/90 MS: 1 CopyPart- 00:09:12.189 
[2024-07-25 15:57:30.085343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:12.189 [2024-07-25 15:57:30.085382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.189 [2024-07-25 15:57:30.085498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:12.189 [2024-07-25 15:57:30.085511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.189 [2024-07-25 15:57:30.085622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:12.189 [2024-07-25 15:57:30.085641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.189 [2024-07-25 15:57:30.085763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:12.189 [2024-07-25 15:57:30.085783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.189 [2024-07-25 15:57:30.085882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:09:12.189 [2024-07-25 15:57:30.085902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:12.189 #62 NEW cov: 12303 ft: 15040 corp: 29/1863b lim: 90 exec/s: 62 rss: 73Mb L: 90/90 MS: 1 CopyPart- 00:09:12.189 [2024-07-25 15:57:30.155463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:12.189 [2024-07-25 15:57:30.155495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.189 [2024-07-25 15:57:30.155612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:12.189 [2024-07-25 15:57:30.155631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.189 [2024-07-25 15:57:30.155734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:12.189 [2024-07-25 15:57:30.155755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.189 [2024-07-25 15:57:30.155868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:12.189 [2024-07-25 15:57:30.155888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.189 [2024-07-25 15:57:30.155991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:09:12.189 [2024-07-25 15:57:30.156011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:12.189 #63 NEW cov: 12303 ft: 15054 corp: 30/1953b lim: 90 exec/s: 63 rss: 73Mb L: 90/90 MS: 1 InsertByte- 00:09:12.448 [2024-07-25 15:57:30.205700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 
cid:0 nsid:0 00:09:12.448 [2024-07-25 15:57:30.205726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.448 [2024-07-25 15:57:30.205819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:12.448 [2024-07-25 15:57:30.205840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.448 [2024-07-25 15:57:30.205912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:12.448 [2024-07-25 15:57:30.205929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.448 [2024-07-25 15:57:30.206032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:12.448 [2024-07-25 15:57:30.206052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.448 #64 NEW cov: 12303 ft: 15093 corp: 31/2041b lim: 90 exec/s: 64 rss: 73Mb L: 88/90 MS: 1 ShuffleBytes- 00:09:12.448 [2024-07-25 15:57:30.256120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:12.448 [2024-07-25 15:57:30.256148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.448 [2024-07-25 15:57:30.256282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:12.448 [2024-07-25 15:57:30.256299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.448 [2024-07-25 15:57:30.256399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:12.448 [2024-07-25 15:57:30.256418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.448 [2024-07-25 15:57:30.256542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:12.448 [2024-07-25 15:57:30.256562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.448 [2024-07-25 15:57:30.256672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:09:12.448 [2024-07-25 15:57:30.256692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:12.448 #65 NEW cov: 12303 ft: 15151 corp: 32/2131b lim: 90 exec/s: 65 rss: 73Mb L: 90/90 MS: 1 ShuffleBytes- 00:09:12.448 [2024-07-25 15:57:30.325472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:12.448 [2024-07-25 15:57:30.325500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.448 [2024-07-25 15:57:30.325582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:12.449 [2024-07-25 15:57:30.325601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.449 #66 NEW cov: 12303 ft: 15164 corp: 33/2175b lim: 90 exec/s: 66 rss: 73Mb L: 44/90 MS: 1 EraseBytes- 00:09:12.449 [2024-07-25 15:57:30.376091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:12.449 [2024-07-25 15:57:30.376119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.449 [2024-07-25 15:57:30.376207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:12.449 [2024-07-25 15:57:30.376223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.449 [2024-07-25 15:57:30.376295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:12.449 [2024-07-25 15:57:30.376312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.449 #67 NEW cov: 12303 ft: 15211 corp: 34/2234b lim: 90 exec/s: 67 rss: 73Mb L: 59/90 MS: 1 ShuffleBytes- 00:09:12.449 [2024-07-25 15:57:30.426798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:12.449 [2024-07-25 15:57:30.426823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.449 [2024-07-25 15:57:30.426936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:12.449 [2024-07-25 15:57:30.426957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.449 [2024-07-25 15:57:30.427038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:12.449 [2024-07-25 15:57:30.427057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.449 [2024-07-25 15:57:30.427164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:12.449 [2024-07-25 15:57:30.427183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.708 #68 NEW cov: 12303 ft: 15234 corp: 35/2320b lim: 90 exec/s: 68 rss: 73Mb L: 86/90 MS: 1 EraseBytes- 00:09:12.708 [2024-07-25 15:57:30.496419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:12.708 [2024-07-25 15:57:30.496450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.708 [2024-07-25 15:57:30.496557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:12.708 [2024-07-25 15:57:30.496576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.708 #69 NEW cov: 12303 ft: 15251 corp: 36/2359b lim: 90 exec/s: 69 rss: 73Mb L: 39/90 MS: 1 ChangeBit- 00:09:12.708 [2024-07-25 15:57:30.566819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 
cid:0 nsid:0 00:09:12.708 [2024-07-25 15:57:30.566846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.708 [2024-07-25 15:57:30.566929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:12.708 [2024-07-25 15:57:30.566949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.708 [2024-07-25 15:57:30.618419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:12.708 [2024-07-25 15:57:30.618453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.708 [2024-07-25 15:57:30.618576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:12.708 [2024-07-25 15:57:30.618598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.708 [2024-07-25 15:57:30.618686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:12.708 [2024-07-25 15:57:30.618706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.708 [2024-07-25 15:57:30.618815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:12.708 [2024-07-25 15:57:30.618832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.708 [2024-07-25 15:57:30.618943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:09:12.708 [2024-07-25 15:57:30.618962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:12.708 #71 NEW cov: 12310 ft: 15316 corp: 37/2449b lim: 90 exec/s: 71 rss: 73Mb L: 90/90 MS: 2 InsertByte-InsertRepeatedBytes- 00:09:12.708 [2024-07-25 15:57:30.668245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:12.708 [2024-07-25 15:57:30.668272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.708 [2024-07-25 15:57:30.668375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:12.708 [2024-07-25 15:57:30.668393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.708 [2024-07-25 15:57:30.668491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:12.708 [2024-07-25 15:57:30.668511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.708 #72 NEW cov: 12310 ft: 15320 corp: 38/2508b lim: 90 exec/s: 72 rss: 73Mb L: 59/90 MS: 1 CopyPart- 00:09:12.967 [2024-07-25 15:57:30.719160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:12.967 [2024-07-25 15:57:30.719186] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.967 [2024-07-25 15:57:30.719333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:12.967 [2024-07-25 15:57:30.719353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.967 [2024-07-25 15:57:30.719453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:12.967 [2024-07-25 15:57:30.719472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.967 [2024-07-25 15:57:30.719584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:12.967 [2024-07-25 15:57:30.719601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.967 [2024-07-25 15:57:30.719713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:09:12.967 [2024-07-25 15:57:30.719727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:12.967 #73 NEW cov: 12310 ft: 15329 corp: 39/2598b lim: 90 exec/s: 36 rss: 73Mb L: 90/90 MS: 1 CrossOver- 00:09:12.967 #73 DONE cov: 12310 ft: 15329 corp: 39/2598b lim: 90 exec/s: 36 rss: 73Mb 00:09:12.967 ###### Recommended dictionary. ###### 00:09:12.967 "\000\000\000E" # Uses: 0 00:09:12.967 "\016\000\000\000\000\000\000\000" # Uses: 0 00:09:12.967 ###### End of recommended dictionary. ###### 00:09:12.967 Done 73 runs in 2 second(s) 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:09:12.967 15:57:30 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:12.967 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:12.968 15:57:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:09:12.968 [2024-07-25 15:57:30.915268] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:09:12.968 [2024-07-25 15:57:30.915348] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174211 ] 00:09:12.968 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.226 [2024-07-25 15:57:31.093532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.226 [2024-07-25 15:57:31.158950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.484 [2024-07-25 15:57:31.217691] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.484 [2024-07-25 15:57:31.233922] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:09:13.484 INFO: Running with entropic power schedule (0xFF, 100). 00:09:13.484 INFO: Seed: 4139902930 00:09:13.484 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:09:13.484 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:09:13.484 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:13.484 INFO: A corpus is not provided, starting from an empty corpus 00:09:13.484 #2 INITED exec/s: 0 rss: 63Mb 00:09:13.484 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:13.484 This may also happen if the target rejected all inputs we tried so far 00:09:13.484 [2024-07-25 15:57:31.283495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:13.484 [2024-07-25 15:57:31.283531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.484 [2024-07-25 15:57:31.283601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:13.484 [2024-07-25 15:57:31.283619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.484 [2024-07-25 15:57:31.283680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:13.484 [2024-07-25 15:57:31.283696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:13.484 NEW_FUNC[1/702]: 0x4a9350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:09:13.484 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:13.484 #10 NEW cov: 12057 ft: 12056 corp: 2/39b lim: 50 exec/s: 0 rss: 70Mb L: 38/38 MS: 3 ChangeByte-InsertByte-InsertRepeatedBytes- 00:09:13.484 [2024-07-25 15:57:31.443893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:13.484 [2024-07-25 15:57:31.443934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.484 [2024-07-25 15:57:31.443999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:13.484 [2024-07-25 15:57:31.444016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.484 [2024-07-25 15:57:31.444077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:13.484 [2024-07-25 15:57:31.444094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:13.742 #11 NEW cov: 12170 ft: 12619 corp: 3/78b lim: 50 exec/s: 0 rss: 70Mb L: 39/39 MS: 1 InsertByte- 00:09:13.742 [2024-07-25 15:57:31.503624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:13.742 [2024-07-25 15:57:31.503648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.742 #17 NEW cov: 12176 ft: 13576 corp: 4/91b lim: 50 exec/s: 0 rss: 70Mb L: 13/39 MS: 1 CrossOver- 00:09:13.742 [2024-07-25 15:57:31.544191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:13.742 [2024-07-25 15:57:31.544215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.742 [2024-07-25 15:57:31.544288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:13.742 [2024-07-25 15:57:31.544301] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.742 [2024-07-25 15:57:31.544355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:13.742 [2024-07-25 15:57:31.544368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:13.742 [2024-07-25 15:57:31.544422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:13.742 [2024-07-25 15:57:31.544433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:13.742 #18 NEW cov: 12261 ft: 14094 corp: 5/140b lim: 50 exec/s: 0 rss: 71Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:09:13.742 [2024-07-25 15:57:31.593863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:13.742 [2024-07-25 15:57:31.593887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.742 #24 NEW cov: 12261 ft: 14243 corp: 6/159b lim: 50 exec/s: 0 rss: 71Mb L: 19/49 MS: 1 InsertRepeatedBytes- 00:09:13.742 [2024-07-25 15:57:31.634458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:13.742 [2024-07-25 15:57:31.634482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.742 [2024-07-25 15:57:31.634534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:13.742 [2024-07-25 15:57:31.634546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.742 [2024-07-25 15:57:31.634600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:13.743 [2024-07-25 15:57:31.634613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:13.743 [2024-07-25 15:57:31.634666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:13.743 [2024-07-25 15:57:31.634679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:13.743 #25 NEW cov: 12261 ft: 14384 corp: 7/208b lim: 50 exec/s: 0 rss: 71Mb L: 49/49 MS: 1 CopyPart- 00:09:13.743 [2024-07-25 15:57:31.674221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:13.743 [2024-07-25 15:57:31.674244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.743 [2024-07-25 15:57:31.674285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:13.743 [2024-07-25 15:57:31.674297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.743 #26 NEW cov: 12261 ft: 14719 corp: 8/228b lim: 50 exec/s: 0 rss: 71Mb L: 20/49 MS: 1 InsertByte- 00:09:13.743 [2024-07-25 15:57:31.724326] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:13.743 [2024-07-25 15:57:31.724350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.743 [2024-07-25 15:57:31.724390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:13.743 [2024-07-25 15:57:31.724403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.001 #27 NEW cov: 12261 ft: 14804 corp: 9/249b lim: 50 exec/s: 0 rss: 71Mb L: 21/49 MS: 1 InsertByte- 00:09:14.001 [2024-07-25 15:57:31.774818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.001 [2024-07-25 15:57:31.774841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.001 [2024-07-25 15:57:31.774909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.001 [2024-07-25 15:57:31.774922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.001 [2024-07-25 15:57:31.774976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:14.001 [2024-07-25 15:57:31.774989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:14.001 [2024-07-25 15:57:31.775043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:14.001 [2024-07-25 15:57:31.775056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:14.001 #28 NEW cov: 12261 ft: 14829 corp: 10/289b lim: 50 exec/s: 0 rss: 71Mb L: 40/49 MS: 1 CopyPart- 00:09:14.001 [2024-07-25 15:57:31.814949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.001 [2024-07-25 15:57:31.814975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.001 [2024-07-25 15:57:31.815037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.001 [2024-07-25 15:57:31.815050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.001 [2024-07-25 15:57:31.815104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:14.001 [2024-07-25 15:57:31.815117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:14.002 [2024-07-25 15:57:31.815169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:14.002 [2024-07-25 15:57:31.815182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:14.002 #29 NEW cov: 12261 ft: 14898 corp: 11/330b lim: 50 exec/s: 0 rss: 71Mb L: 41/49 MS: 1 InsertByte- 00:09:14.002 [2024-07-25 15:57:31.865058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.002 [2024-07-25 15:57:31.865084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.002 [2024-07-25 15:57:31.865137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.002 [2024-07-25 15:57:31.865150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.002 [2024-07-25 15:57:31.865200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:14.002 [2024-07-25 15:57:31.865213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:14.002 [2024-07-25 15:57:31.865264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:14.002 [2024-07-25 15:57:31.865276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:14.002 #30 NEW cov: 12261 ft: 14980 corp: 12/379b lim: 50 exec/s: 0 rss: 71Mb L: 49/49 MS: 1 ChangeBinInt- 00:09:14.002 [2024-07-25 15:57:31.915376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.002 [2024-07-25 15:57:31.915400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.002 [2024-07-25 15:57:31.915467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.002 [2024-07-25 15:57:31.915477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.002 [2024-07-25 15:57:31.915531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:14.002 [2024-07-25 15:57:31.915544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:14.002 [2024-07-25 15:57:31.915595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:14.002 [2024-07-25 15:57:31.915608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:14.002 [2024-07-25 15:57:31.915662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:09:14.002 [2024-07-25 15:57:31.915674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:14.002 #31 NEW cov: 12261 ft: 15026 corp: 13/429b lim: 50 exec/s: 0 rss: 71Mb L: 50/50 MS: 1 CrossOver- 00:09:14.002 [2024-07-25 15:57:31.955028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.002 [2024-07-25 15:57:31.955054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.002 [2024-07-25 15:57:31.955109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.002 [2024-07-25 15:57:31.955121] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.002 #32 NEW cov: 12261 ft: 15049 corp: 14/449b lim: 50 exec/s: 0 rss: 71Mb L: 20/50 MS: 1 ChangeBinInt- 00:09:14.261 [2024-07-25 15:57:31.995150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.261 [2024-07-25 15:57:31.995174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.261 [2024-07-25 15:57:31.995218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.261 [2024-07-25 15:57:31.995230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.261 #33 NEW cov: 12261 ft: 15067 corp: 15/469b lim: 50 exec/s: 0 rss: 71Mb L: 20/50 MS: 1 ChangeByte- 00:09:14.261 [2024-07-25 15:57:32.035414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.261 [2024-07-25 15:57:32.035441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.261 [2024-07-25 15:57:32.035480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.261 [2024-07-25 15:57:32.035494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.262 [2024-07-25 15:57:32.035547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:14.262 [2024-07-25 15:57:32.035560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:14.262 #39 NEW cov: 12261 ft: 15093 corp: 16/506b lim: 50 exec/s: 0 rss: 72Mb L: 37/50 MS: 1 CrossOver- 00:09:14.262 [2024-07-25 15:57:32.085382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.262 [2024-07-25 15:57:32.085406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.262 [2024-07-25 15:57:32.085464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.262 [2024-07-25 15:57:32.085477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.262 #44 NEW cov: 12261 ft: 15112 corp: 17/527b lim: 50 exec/s: 0 rss: 72Mb L: 21/50 MS: 5 CopyPart-CopyPart-ShuffleBytes-ShuffleBytes-CrossOver- 00:09:14.262 [2024-07-25 15:57:32.125645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.262 [2024-07-25 15:57:32.125669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.262 [2024-07-25 15:57:32.125738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.262 [2024-07-25 15:57:32.125752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.262 [2024-07-25 15:57:32.125808] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:14.262 [2024-07-25 15:57:32.125821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:14.262 #45 NEW cov: 12261 ft: 15122 corp: 18/565b lim: 50 exec/s: 0 rss: 72Mb L: 38/50 MS: 1 CopyPart- 00:09:14.262 [2024-07-25 15:57:32.165946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.262 [2024-07-25 15:57:32.165972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.262 [2024-07-25 15:57:32.166040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.262 [2024-07-25 15:57:32.166052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.262 [2024-07-25 15:57:32.166104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:14.262 [2024-07-25 15:57:32.166116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:14.262 [2024-07-25 15:57:32.166167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:14.262 [2024-07-25 15:57:32.166180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:14.262 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:14.262 #46 NEW cov: 12284 ft: 15156 corp: 19/611b lim: 50 exec/s: 0 rss: 72Mb L: 46/50 MS: 1 InsertRepeatedBytes- 00:09:14.262 [2024-07-25 15:57:32.205596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.262 [2024-07-25 15:57:32.205620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.262 #47 NEW cov: 12284 ft: 15181 corp: 20/623b lim: 50 exec/s: 0 rss: 72Mb L: 12/50 MS: 1 EraseBytes- 00:09:14.521 [2024-07-25 15:57:32.256231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.521 [2024-07-25 15:57:32.256256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.521 [2024-07-25 15:57:32.256311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.522 [2024-07-25 15:57:32.256323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.522 [2024-07-25 15:57:32.256376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:14.522 [2024-07-25 15:57:32.256389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:14.522 [2024-07-25 15:57:32.256442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:14.522 [2024-07-25 15:57:32.256454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:14.522 #48 NEW cov: 12284 ft: 15203 corp: 21/672b lim: 50 exec/s: 48 rss: 72Mb L: 49/50 MS: 1 ChangeBinInt- 00:09:14.522 [2024-07-25 15:57:32.295898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.522 [2024-07-25 15:57:32.295925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.522 #49 NEW cov: 12284 ft: 15287 corp: 22/685b lim: 50 exec/s: 49 rss: 72Mb L: 13/50 MS: 1 ShuffleBytes- 00:09:14.522 [2024-07-25 15:57:32.336551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.522 [2024-07-25 15:57:32.336576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.522 [2024-07-25 15:57:32.336643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.522 [2024-07-25 15:57:32.336655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.522 [2024-07-25 15:57:32.336707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:14.522 [2024-07-25 15:57:32.336723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:14.522 [2024-07-25 15:57:32.336774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:14.522 [2024-07-25 15:57:32.336787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:14.522 [2024-07-25 15:57:32.336840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:09:14.522 [2024-07-25 15:57:32.336852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:14.522 #50 NEW cov: 12284 ft: 15338 corp: 23/735b lim: 50 exec/s: 50 rss: 72Mb L: 50/50 MS: 1 ChangeBinInt- 00:09:14.522 [2024-07-25 15:57:32.376223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.522 [2024-07-25 15:57:32.376247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.522 [2024-07-25 15:57:32.376288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.522 [2024-07-25 15:57:32.376301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.522 #51 NEW cov: 12284 ft: 15380 corp: 24/756b lim: 50 exec/s: 51 rss: 72Mb L: 21/50 MS: 1 ChangeBinInt- 00:09:14.522 [2024-07-25 15:57:32.426855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.522 [2024-07-25 15:57:32.426880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.522 [2024-07-25 15:57:32.426933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 
nsid:0 00:09:14.522 [2024-07-25 15:57:32.426943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.522 [2024-07-25 15:57:32.426993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:14.522 [2024-07-25 15:57:32.427006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:14.522 [2024-07-25 15:57:32.427057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:14.522 [2024-07-25 15:57:32.427069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:14.522 [2024-07-25 15:57:32.427123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:09:14.522 [2024-07-25 15:57:32.427135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:14.522 #52 NEW cov: 12284 ft: 15390 corp: 25/806b lim: 50 exec/s: 52 rss: 72Mb L: 50/50 MS: 1 ChangeBinInt- 00:09:14.522 [2024-07-25 15:57:32.466624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.522 [2024-07-25 15:57:32.466649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.522 [2024-07-25 15:57:32.466716] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.522 [2024-07-25 15:57:32.466728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.522 [2024-07-25 15:57:32.466790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:14.522 [2024-07-25 15:57:32.466804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:14.522 #53 NEW cov: 12284 ft: 15470 corp: 26/845b lim: 50 exec/s: 53 rss: 72Mb L: 39/50 MS: 1 InsertByte- 00:09:14.781 [2024-07-25 15:57:32.516649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.781 [2024-07-25 15:57:32.516674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.781 [2024-07-25 15:57:32.516711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.781 [2024-07-25 15:57:32.516724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.781 #54 NEW cov: 12284 ft: 15489 corp: 27/865b lim: 50 exec/s: 54 rss: 72Mb L: 20/50 MS: 1 ChangeBinInt- 00:09:14.782 [2024-07-25 15:57:32.556604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.782 [2024-07-25 15:57:32.556629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.782 #55 NEW cov: 12284 ft: 15492 corp: 28/878b lim: 50 exec/s: 55 rss: 72Mb L: 13/50 MS: 1 ShuffleBytes- 00:09:14.782 [2024-07-25 
15:57:32.596850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.782 [2024-07-25 15:57:32.596874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.782 [2024-07-25 15:57:32.596921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.782 [2024-07-25 15:57:32.596933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.782 #56 NEW cov: 12284 ft: 15503 corp: 29/898b lim: 50 exec/s: 56 rss: 72Mb L: 20/50 MS: 1 CopyPart- 00:09:14.782 [2024-07-25 15:57:32.646994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.782 [2024-07-25 15:57:32.647018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.782 [2024-07-25 15:57:32.647076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.782 [2024-07-25 15:57:32.647090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.782 #57 NEW cov: 12284 ft: 15514 corp: 30/918b lim: 50 exec/s: 57 rss: 72Mb L: 20/50 MS: 1 ChangeBit- 00:09:14.782 [2024-07-25 15:57:32.687588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.782 [2024-07-25 15:57:32.687611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.782 [2024-07-25 15:57:32.687661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.782 [2024-07-25 15:57:32.687671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.782 [2024-07-25 15:57:32.687722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:14.782 [2024-07-25 15:57:32.687734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:14.782 [2024-07-25 15:57:32.687788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:14.782 [2024-07-25 15:57:32.687800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:14.782 [2024-07-25 15:57:32.687852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:09:14.782 [2024-07-25 15:57:32.687865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:14.782 #58 NEW cov: 12284 ft: 15522 corp: 31/968b lim: 50 exec/s: 58 rss: 72Mb L: 50/50 MS: 1 ChangeBit- 00:09:14.782 [2024-07-25 15:57:32.737405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:14.782 [2024-07-25 15:57:32.737429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.782 [2024-07-25 
15:57:32.737482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:14.782 [2024-07-25 15:57:32.737494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:14.782 [2024-07-25 15:57:32.737547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:14.782 [2024-07-25 15:57:32.737560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:14.782 #59 NEW cov: 12284 ft: 15543 corp: 32/999b lim: 50 exec/s: 59 rss: 72Mb L: 31/50 MS: 1 InsertRepeatedBytes- 00:09:15.041 [2024-07-25 15:57:32.777535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:15.041 [2024-07-25 15:57:32.777559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.041 [2024-07-25 15:57:32.777609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:15.041 [2024-07-25 15:57:32.777622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.041 [2024-07-25 15:57:32.777675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:15.041 [2024-07-25 15:57:32.777687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:15.041 #60 NEW cov: 12284 ft: 15628 corp: 33/1036b lim: 50 exec/s: 60 rss: 72Mb L: 37/50 MS: 1 ShuffleBytes- 00:09:15.041 [2024-07-25 15:57:32.827621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:15.041 [2024-07-25 15:57:32.827644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.041 [2024-07-25 15:57:32.827701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:15.041 [2024-07-25 15:57:32.827715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.041 [2024-07-25 15:57:32.827774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:15.041 [2024-07-25 15:57:32.827787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:15.041 #61 NEW cov: 12284 ft: 15637 corp: 34/1075b lim: 50 exec/s: 61 rss: 72Mb L: 39/50 MS: 1 ChangeBinInt- 00:09:15.041 [2024-07-25 15:57:32.877811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:15.041 [2024-07-25 15:57:32.877837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.041 [2024-07-25 15:57:32.877904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:15.041 [2024-07-25 15:57:32.877916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.041 
[2024-07-25 15:57:32.877971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:15.041 [2024-07-25 15:57:32.877984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:15.041 #62 NEW cov: 12284 ft: 15653 corp: 35/1114b lim: 50 exec/s: 62 rss: 72Mb L: 39/50 MS: 1 ChangeBinInt- 00:09:15.041 [2024-07-25 15:57:32.928108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:15.041 [2024-07-25 15:57:32.928135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.041 [2024-07-25 15:57:32.928203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:15.041 [2024-07-25 15:57:32.928216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.041 [2024-07-25 15:57:32.928268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:15.041 [2024-07-25 15:57:32.928281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:15.041 [2024-07-25 15:57:32.928334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:15.041 [2024-07-25 15:57:32.928347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:15.041 #63 NEW cov: 12284 ft: 15658 corp: 36/1163b lim: 50 exec/s: 63 rss: 72Mb L: 49/50 MS: 1 ChangeBinInt- 00:09:15.041 [2024-07-25 15:57:32.968065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:15.042 [2024-07-25 15:57:32.968089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.042 [2024-07-25 15:57:32.968157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:15.042 [2024-07-25 15:57:32.968169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.042 [2024-07-25 15:57:32.968223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:15.042 [2024-07-25 15:57:32.968237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:15.042 #64 NEW cov: 12284 ft: 15671 corp: 37/1193b lim: 50 exec/s: 64 rss: 72Mb L: 30/50 MS: 1 CrossOver- 00:09:15.042 [2024-07-25 15:57:33.007876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:15.042 [2024-07-25 15:57:33.007901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.301 #65 NEW cov: 12284 ft: 15692 corp: 38/1206b lim: 50 exec/s: 65 rss: 72Mb L: 13/50 MS: 1 ChangeBinInt- 00:09:15.301 [2024-07-25 15:57:33.058626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:15.301 [2024-07-25 15:57:33.058649] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.301 [2024-07-25 15:57:33.058701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:15.301 [2024-07-25 15:57:33.058711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.301 [2024-07-25 15:57:33.058763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:15.301 [2024-07-25 15:57:33.058775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:15.301 [2024-07-25 15:57:33.058845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:15.301 [2024-07-25 15:57:33.058857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:15.301 [2024-07-25 15:57:33.058911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:09:15.301 [2024-07-25 15:57:33.058924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:15.301 #66 NEW cov: 12284 ft: 15708 corp: 39/1256b lim: 50 exec/s: 66 rss: 72Mb L: 50/50 MS: 1 CopyPart- 00:09:15.301 [2024-07-25 15:57:33.098456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:15.301 [2024-07-25 15:57:33.098482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.301 [2024-07-25 15:57:33.098537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:15.301 [2024-07-25 15:57:33.098550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.301 [2024-07-25 15:57:33.098604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:15.301 [2024-07-25 15:57:33.098618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:15.301 #67 NEW cov: 12284 ft: 15714 corp: 40/1294b lim: 50 exec/s: 67 rss: 72Mb L: 38/50 MS: 1 CopyPart- 00:09:15.301 [2024-07-25 15:57:33.138385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:15.301 [2024-07-25 15:57:33.138408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.301 [2024-07-25 15:57:33.138446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:15.301 [2024-07-25 15:57:33.138459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.301 #68 NEW cov: 12284 ft: 15730 corp: 41/1314b lim: 50 exec/s: 68 rss: 72Mb L: 20/50 MS: 1 ChangeByte- 00:09:15.301 [2024-07-25 15:57:33.178854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:15.301 [2024-07-25 15:57:33.178879] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.301 [2024-07-25 15:57:33.178929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:15.301 [2024-07-25 15:57:33.178939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.301 [2024-07-25 15:57:33.178992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:15.301 [2024-07-25 15:57:33.179005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:15.301 [2024-07-25 15:57:33.179057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:15.301 [2024-07-25 15:57:33.179069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:15.301 #69 NEW cov: 12284 ft: 15737 corp: 42/1355b lim: 50 exec/s: 69 rss: 72Mb L: 41/50 MS: 1 ChangeBit- 00:09:15.301 [2024-07-25 15:57:33.228854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:15.301 [2024-07-25 15:57:33.228879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.301 [2024-07-25 15:57:33.228935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:15.301 [2024-07-25 15:57:33.228948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.301 [2024-07-25 15:57:33.229002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:15.301 [2024-07-25 15:57:33.229016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:15.301 #70 NEW cov: 12284 ft: 15745 corp: 43/1393b lim: 50 exec/s: 70 rss: 72Mb L: 38/50 MS: 1 ChangeByte- 00:09:15.301 [2024-07-25 15:57:33.279125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:15.301 [2024-07-25 15:57:33.279152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.301 [2024-07-25 15:57:33.279224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:15.301 [2024-07-25 15:57:33.279237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.301 [2024-07-25 15:57:33.279290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:15.301 [2024-07-25 15:57:33.279303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:15.301 [2024-07-25 15:57:33.279357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:15.301 [2024-07-25 15:57:33.279370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 
dnr:1 00:09:15.560 #71 NEW cov: 12284 ft: 15756 corp: 44/1434b lim: 50 exec/s: 35 rss: 72Mb L: 41/50 MS: 1 ChangeByte- 00:09:15.560 #71 DONE cov: 12284 ft: 15756 corp: 44/1434b lim: 50 exec/s: 35 rss: 72Mb 00:09:15.560 Done 71 runs in 2 second(s) 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:15.560 15:57:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:09:15.560 [2024-07-25 15:57:33.455520] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:09:15.560 [2024-07-25 15:57:33.455588] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174643 ] 00:09:15.560 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.819 [2024-07-25 15:57:33.635856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.819 [2024-07-25 15:57:33.701624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.819 [2024-07-25 15:57:33.760399] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.819 [2024-07-25 15:57:33.776649] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:09:15.819 INFO: Running with entropic power schedule (0xFF, 100). 00:09:15.819 INFO: Seed: 2388933429 00:09:16.077 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:09:16.077 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:09:16.077 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:16.077 INFO: A corpus is not provided, starting from an empty corpus 00:09:16.077 #2 INITED exec/s: 0 rss: 63Mb 00:09:16.077 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:16.077 This may also happen if the target rejected all inputs we tried so far 00:09:16.077 [2024-07-25 15:57:33.844202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.077 [2024-07-25 15:57:33.844239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.077 [2024-07-25 15:57:33.844338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:16.078 [2024-07-25 15:57:33.844352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.078 [2024-07-25 15:57:33.844441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:16.078 [2024-07-25 15:57:33.844458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:16.078 NEW_FUNC[1/702]: 0x4ab610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:09:16.078 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:16.078 #9 NEW cov: 12083 ft: 12084 corp: 2/52b lim: 85 exec/s: 0 rss: 71Mb L: 51/51 MS: 2 CrossOver-InsertRepeatedBytes- 00:09:16.078 [2024-07-25 15:57:34.014628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.078 [2024-07-25 15:57:34.014669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.078 [2024-07-25 15:57:34.014752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:16.078 [2024-07-25 15:57:34.014772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.078 [2024-07-25 15:57:34.014862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:16.078 [2024-07-25 15:57:34.014880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:16.078 #10 NEW cov: 12196 ft: 12677 corp: 3/103b lim: 85 exec/s: 0 rss: 71Mb L: 51/51 MS: 1 ShuffleBytes- 00:09:16.337 [2024-07-25 15:57:34.084965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.337 [2024-07-25 15:57:34.084990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.337 [2024-07-25 15:57:34.085083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:16.337 [2024-07-25 15:57:34.085099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.337 [2024-07-25 15:57:34.085186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:16.337 [2024-07-25 15:57:34.085198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:16.337 #11 NEW cov: 12202 ft: 12985 corp: 4/154b lim: 85 exec/s: 0 rss: 71Mb L: 51/51 MS: 1 ChangeBit- 00:09:16.337 [2024-07-25 15:57:34.145186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.337 [2024-07-25 15:57:34.145209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.337 [2024-07-25 15:57:34.145301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:16.337 [2024-07-25 15:57:34.145317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.337 [2024-07-25 15:57:34.145407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:16.337 [2024-07-25 15:57:34.145422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:16.337 #12 NEW cov: 12287 ft: 13165 corp: 5/205b lim: 85 exec/s: 0 rss: 72Mb L: 51/51 MS: 1 ShuffleBytes- 00:09:16.337 [2024-07-25 15:57:34.195094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.337 [2024-07-25 15:57:34.195119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.337 [2024-07-25 15:57:34.195199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:16.337 [2024-07-25 15:57:34.195215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.337 #13 NEW cov: 12287 ft: 13636 corp: 6/250b lim: 85 exec/s: 0 rss: 72Mb L: 45/51 MS: 1 EraseBytes- 00:09:16.337 [2024-07-25 15:57:34.265764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.337 
[2024-07-25 15:57:34.265792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.337 [2024-07-25 15:57:34.265874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:16.337 [2024-07-25 15:57:34.265892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.337 [2024-07-25 15:57:34.265977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:16.337 [2024-07-25 15:57:34.265991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:16.337 #14 NEW cov: 12287 ft: 13706 corp: 7/302b lim: 85 exec/s: 0 rss: 72Mb L: 52/52 MS: 1 InsertByte- 00:09:16.337 [2024-07-25 15:57:34.316050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.337 [2024-07-25 15:57:34.316075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.337 [2024-07-25 15:57:34.316165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:16.337 [2024-07-25 15:57:34.316180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.337 [2024-07-25 15:57:34.316275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:16.337 [2024-07-25 15:57:34.316288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:16.596 #15 NEW cov: 12287 ft: 13764 corp: 8/353b lim: 85 exec/s: 0 rss: 72Mb L: 51/52 MS: 1 ChangeBinInt- 00:09:16.596 [2024-07-25 15:57:34.376285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.596 [2024-07-25 15:57:34.376310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.596 [2024-07-25 15:57:34.376383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:16.596 [2024-07-25 15:57:34.376397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.596 [2024-07-25 15:57:34.376466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:16.596 [2024-07-25 15:57:34.376483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:16.596 #21 NEW cov: 12287 ft: 13791 corp: 9/404b lim: 85 exec/s: 0 rss: 72Mb L: 51/52 MS: 1 CrossOver- 00:09:16.596 [2024-07-25 15:57:34.436506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.596 [2024-07-25 15:57:34.436533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.596 [2024-07-25 15:57:34.436612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:16.596 
[2024-07-25 15:57:34.436627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.596 [2024-07-25 15:57:34.436705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:16.596 [2024-07-25 15:57:34.436721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:16.596 #22 NEW cov: 12287 ft: 13826 corp: 10/455b lim: 85 exec/s: 0 rss: 72Mb L: 51/52 MS: 1 ChangeBit- 00:09:16.596 [2024-07-25 15:57:34.486853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.596 [2024-07-25 15:57:34.486877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.596 [2024-07-25 15:57:34.486985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:16.596 [2024-07-25 15:57:34.487001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.596 [2024-07-25 15:57:34.487092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:16.596 [2024-07-25 15:57:34.487106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:16.596 #23 NEW cov: 12287 ft: 13869 corp: 11/507b lim: 85 exec/s: 0 rss: 72Mb L: 52/52 MS: 1 ShuffleBytes- 00:09:16.596 [2024-07-25 15:57:34.547464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.596 [2024-07-25 15:57:34.547488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.596 [2024-07-25 15:57:34.547582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:16.596 [2024-07-25 15:57:34.547597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.596 [2024-07-25 15:57:34.547693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:16.596 [2024-07-25 15:57:34.547704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:16.596 [2024-07-25 15:57:34.547790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:16.596 [2024-07-25 15:57:34.547817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:16.596 #24 NEW cov: 12287 ft: 14301 corp: 12/583b lim: 85 exec/s: 0 rss: 72Mb L: 76/76 MS: 1 InsertRepeatedBytes- 00:09:16.855 [2024-07-25 15:57:34.597056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.855 [2024-07-25 15:57:34.597085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.855 [2024-07-25 15:57:34.597136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 
00:09:16.855 [2024-07-25 15:57:34.597152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.855 #25 NEW cov: 12287 ft: 14334 corp: 13/619b lim: 85 exec/s: 0 rss: 72Mb L: 36/76 MS: 1 CrossOver- 00:09:16.855 [2024-07-25 15:57:34.647826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.855 [2024-07-25 15:57:34.647850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.855 [2024-07-25 15:57:34.647919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:16.855 [2024-07-25 15:57:34.647932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.855 [2024-07-25 15:57:34.648015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:16.855 [2024-07-25 15:57:34.648029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:16.855 #26 NEW cov: 12287 ft: 14358 corp: 14/670b lim: 85 exec/s: 0 rss: 72Mb L: 51/76 MS: 1 ChangeBinInt- 00:09:16.855 [2024-07-25 15:57:34.698066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.855 [2024-07-25 15:57:34.698090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.855 [2024-07-25 15:57:34.698179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:16.855 [2024-07-25 15:57:34.698194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.855 [2024-07-25 15:57:34.698279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:16.855 [2024-07-25 15:57:34.698291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:16.855 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:16.855 #27 NEW cov: 12310 ft: 14424 corp: 15/728b lim: 85 exec/s: 0 rss: 72Mb L: 58/76 MS: 1 InsertRepeatedBytes- 00:09:16.855 [2024-07-25 15:57:34.768962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.855 [2024-07-25 15:57:34.768988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.855 [2024-07-25 15:57:34.769110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:16.855 [2024-07-25 15:57:34.769124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.855 [2024-07-25 15:57:34.769211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:16.855 [2024-07-25 15:57:34.769228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:09:16.855 [2024-07-25 15:57:34.769315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:16.855 [2024-07-25 15:57:34.769329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:16.855 #28 NEW cov: 12310 ft: 14548 corp: 16/805b lim: 85 exec/s: 0 rss: 72Mb L: 77/77 MS: 1 CopyPart- 00:09:16.855 [2024-07-25 15:57:34.838753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:16.855 [2024-07-25 15:57:34.838786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.855 [2024-07-25 15:57:34.838855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:16.855 [2024-07-25 15:57:34.838869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:16.855 [2024-07-25 15:57:34.838967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:16.855 [2024-07-25 15:57:34.838983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:17.114 #29 NEW cov: 12310 ft: 14581 corp: 17/857b lim: 85 exec/s: 29 rss: 72Mb L: 52/77 MS: 1 InsertByte- 00:09:17.114 [2024-07-25 15:57:34.899327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.114 [2024-07-25 15:57:34.899352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.114 [2024-07-25 15:57:34.899436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.114 [2024-07-25 15:57:34.899454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.114 [2024-07-25 15:57:34.899536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:17.114 [2024-07-25 15:57:34.899549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:17.114 #30 NEW cov: 12310 ft: 14660 corp: 18/908b lim: 85 exec/s: 30 rss: 72Mb L: 51/77 MS: 1 ChangeBit- 00:09:17.114 [2024-07-25 15:57:34.949857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.114 [2024-07-25 15:57:34.949883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.114 [2024-07-25 15:57:34.949990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.114 [2024-07-25 15:57:34.950007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.114 [2024-07-25 15:57:34.950098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:17.114 [2024-07-25 15:57:34.950110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:09:17.114 #31 NEW cov: 12310 ft: 14668 corp: 19/960b lim: 85 exec/s: 31 rss: 72Mb L: 52/77 MS: 1 ShuffleBytes- 00:09:17.114 [2024-07-25 15:57:35.020423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.114 [2024-07-25 15:57:35.020450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.114 [2024-07-25 15:57:35.020530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.114 [2024-07-25 15:57:35.020546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.114 [2024-07-25 15:57:35.020628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:17.114 [2024-07-25 15:57:35.020645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:17.114 [2024-07-25 15:57:35.020728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:17.114 [2024-07-25 15:57:35.020744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:17.114 #32 NEW cov: 12310 ft: 14725 corp: 20/1032b lim: 85 exec/s: 32 rss: 72Mb L: 72/77 MS: 1 CopyPart- 00:09:17.114 [2024-07-25 15:57:35.080263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.114 [2024-07-25 15:57:35.080290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.114 [2024-07-25 15:57:35.080383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.114 [2024-07-25 15:57:35.080400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.114 [2024-07-25 15:57:35.080484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:17.114 [2024-07-25 15:57:35.080501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:17.373 #33 NEW cov: 12310 ft: 14743 corp: 21/1083b lim: 85 exec/s: 33 rss: 72Mb L: 51/77 MS: 1 CopyPart- 00:09:17.373 [2024-07-25 15:57:35.150426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.373 [2024-07-25 15:57:35.150453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.373 [2024-07-25 15:57:35.150517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.373 [2024-07-25 15:57:35.150533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.373 #39 NEW cov: 12310 ft: 14749 corp: 22/1128b lim: 85 exec/s: 39 rss: 72Mb L: 45/77 MS: 1 ShuffleBytes- 00:09:17.373 [2024-07-25 15:57:35.200976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.373 [2024-07-25 15:57:35.201005] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.373 [2024-07-25 15:57:35.201088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.373 [2024-07-25 15:57:35.201108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.373 [2024-07-25 15:57:35.201189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:17.373 [2024-07-25 15:57:35.201206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:17.373 [2024-07-25 15:57:35.251975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.373 [2024-07-25 15:57:35.252002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.373 [2024-07-25 15:57:35.252094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.373 [2024-07-25 15:57:35.252110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.373 [2024-07-25 15:57:35.252152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:17.373 [2024-07-25 15:57:35.252169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:17.373 [2024-07-25 15:57:35.252229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:17.373 [2024-07-25 15:57:35.252244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:17.373 [2024-07-25 15:57:35.252325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:09:17.373 [2024-07-25 15:57:35.252342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:17.373 #41 NEW cov: 12310 ft: 14853 corp: 23/1213b lim: 85 exec/s: 41 rss: 72Mb L: 85/85 MS: 2 CMP-InsertRepeatedBytes- DE: "\036\202\355\003p\331\027\000"- 00:09:17.373 [2024-07-25 15:57:35.301434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.373 [2024-07-25 15:57:35.301460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.373 [2024-07-25 15:57:35.301542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.373 [2024-07-25 15:57:35.301557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.373 [2024-07-25 15:57:35.301639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:17.373 [2024-07-25 15:57:35.301654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:17.373 #42 NEW cov: 12310 ft: 14879 corp: 
24/1264b lim: 85 exec/s: 42 rss: 72Mb L: 51/85 MS: 1 ChangeBit- 00:09:17.373 [2024-07-25 15:57:35.351188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.373 [2024-07-25 15:57:35.351213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.373 [2024-07-25 15:57:35.351288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.374 [2024-07-25 15:57:35.351303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.633 #43 NEW cov: 12310 ft: 14894 corp: 25/1309b lim: 85 exec/s: 43 rss: 72Mb L: 45/85 MS: 1 ShuffleBytes- 00:09:17.633 [2024-07-25 15:57:35.401934] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.633 [2024-07-25 15:57:35.401958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.633 [2024-07-25 15:57:35.402071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.633 [2024-07-25 15:57:35.402086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.633 [2024-07-25 15:57:35.402168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:17.633 [2024-07-25 15:57:35.402181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:17.633 #44 NEW cov: 12310 ft: 14902 corp: 26/1360b lim: 85 exec/s: 44 rss: 72Mb L: 51/85 MS: 1 ChangeByte- 00:09:17.633 [2024-07-25 15:57:35.462385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.633 [2024-07-25 15:57:35.462409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.633 [2024-07-25 15:57:35.462505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.633 [2024-07-25 15:57:35.462522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.633 [2024-07-25 15:57:35.462608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:17.633 [2024-07-25 15:57:35.462620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:17.633 #45 NEW cov: 12310 ft: 14948 corp: 27/1412b lim: 85 exec/s: 45 rss: 72Mb L: 52/85 MS: 1 InsertByte- 00:09:17.633 [2024-07-25 15:57:35.513148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.633 [2024-07-25 15:57:35.513175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.633 [2024-07-25 15:57:35.513270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.633 [2024-07-25 15:57:35.513285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.633 [2024-07-25 15:57:35.513371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:17.633 [2024-07-25 15:57:35.513384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:17.633 [2024-07-25 15:57:35.513468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:17.633 [2024-07-25 15:57:35.513484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:17.633 #46 NEW cov: 12310 ft: 15018 corp: 28/1488b lim: 85 exec/s: 46 rss: 72Mb L: 76/85 MS: 1 ShuffleBytes- 00:09:17.633 [2024-07-25 15:57:35.582640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.633 [2024-07-25 15:57:35.582665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.633 [2024-07-25 15:57:35.582763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.633 [2024-07-25 15:57:35.582782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.633 #47 NEW cov: 12310 ft: 15027 corp: 29/1534b lim: 85 exec/s: 47 rss: 73Mb L: 46/85 MS: 1 CrossOver- 00:09:17.893 [2024-07-25 15:57:35.642838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.893 [2024-07-25 15:57:35.642865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.893 [2024-07-25 15:57:35.642945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.893 [2024-07-25 15:57:35.642960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.893 #48 NEW cov: 12310 ft: 15044 corp: 30/1580b lim: 85 exec/s: 48 rss: 73Mb L: 46/85 MS: 1 ChangeBinInt- 00:09:17.893 [2024-07-25 15:57:35.703351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.893 [2024-07-25 15:57:35.703374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.893 [2024-07-25 15:57:35.703465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.893 [2024-07-25 15:57:35.703479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.893 [2024-07-25 15:57:35.703563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:17.893 [2024-07-25 15:57:35.703574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:17.893 #49 NEW cov: 12310 ft: 15066 corp: 31/1632b lim: 85 exec/s: 49 rss: 73Mb L: 52/85 MS: 1 InsertByte- 00:09:17.893 [2024-07-25 15:57:35.753750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) 
sqid:1 cid:0 nsid:0 00:09:17.893 [2024-07-25 15:57:35.753777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.893 [2024-07-25 15:57:35.753877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.893 [2024-07-25 15:57:35.753893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.893 [2024-07-25 15:57:35.753982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:17.893 [2024-07-25 15:57:35.754002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:17.893 #50 NEW cov: 12310 ft: 15071 corp: 32/1691b lim: 85 exec/s: 50 rss: 73Mb L: 59/85 MS: 1 PersAutoDict- DE: "\036\202\355\003p\331\027\000"- 00:09:17.893 [2024-07-25 15:57:35.804224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:17.893 [2024-07-25 15:57:35.804248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:17.893 [2024-07-25 15:57:35.804341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:17.893 [2024-07-25 15:57:35.804356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:17.893 [2024-07-25 15:57:35.804439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:17.893 [2024-07-25 15:57:35.804455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:17.893 #51 NEW cov: 12310 ft: 15118 corp: 33/1749b lim: 85 exec/s: 25 rss: 73Mb L: 58/85 MS: 1 ShuffleBytes- 00:09:17.893 #51 DONE cov: 12310 ft: 15118 corp: 33/1749b lim: 85 exec/s: 25 rss: 73Mb 00:09:17.893 ###### Recommended dictionary. ###### 00:09:17.893 "\036\202\355\003p\331\027\000" # Uses: 1 00:09:17.893 ###### End of recommended dictionary. 
###### 00:09:17.893 Done 51 runs in 2 second(s) 00:09:18.152 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:09:18.152 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:18.152 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:18.152 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:09:18.152 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:09:18.152 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:18.152 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:18.152 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:18.152 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:09:18.153 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:18.153 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:18.153 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:09:18.153 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:09:18.153 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:18.153 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:09:18.153 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:18.153 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:18.153 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:18.153 15:57:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:09:18.153 [2024-07-25 15:57:35.997286] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:09:18.153 [2024-07-25 15:57:35.997366] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175078 ] 00:09:18.153 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.411 [2024-07-25 15:57:36.175024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.411 [2024-07-25 15:57:36.239872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.412 [2024-07-25 15:57:36.298667] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.412 [2024-07-25 15:57:36.314876] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:09:18.412 INFO: Running with entropic power schedule (0xFF, 100). 00:09:18.412 INFO: Seed: 630969201 00:09:18.412 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:09:18.412 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:09:18.412 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:18.412 INFO: A corpus is not provided, starting from an empty corpus 00:09:18.412 #2 INITED exec/s: 0 rss: 63Mb 00:09:18.412 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:18.412 This may also happen if the target rejected all inputs we tried so far 00:09:18.412 [2024-07-25 15:57:36.375376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:18.412 [2024-07-25 15:57:36.375412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:18.670 NEW_FUNC[1/701]: 0x4ae840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:09:18.670 NEW_FUNC[2/701]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:18.670 #6 NEW cov: 12015 ft: 12015 corp: 2/7b lim: 25 exec/s: 0 rss: 70Mb L: 6/6 MS: 4 ShuffleBytes-InsertByte-ShuffleBytes-CMP- DE: "\010\000\000\000"- 00:09:18.670 [2024-07-25 15:57:36.535936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:18.670 [2024-07-25 15:57:36.535990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:18.670 #12 NEW cov: 12129 ft: 12684 corp: 3/13b lim: 25 exec/s: 0 rss: 71Mb L: 6/6 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:09:18.670 [2024-07-25 15:57:36.606042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:18.670 [2024-07-25 15:57:36.606073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:18.670 #13 NEW cov: 12135 ft: 12821 corp: 4/21b lim: 25 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:09:18.670 [2024-07-25 15:57:36.656213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:18.670 [2024-07-25 15:57:36.656238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:09:18.927 #14 NEW cov: 12220 ft: 13048 corp: 5/29b lim: 25 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 ShuffleBytes- 00:09:18.927 [2024-07-25 15:57:36.716424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:18.927 [2024-07-25 15:57:36.716447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:18.927 #15 NEW cov: 12220 ft: 13193 corp: 6/35b lim: 25 exec/s: 0 rss: 71Mb L: 6/8 MS: 1 ShuffleBytes- 00:09:18.927 [2024-07-25 15:57:36.766583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:18.927 [2024-07-25 15:57:36.766606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:18.927 #16 NEW cov: 12220 ft: 13239 corp: 7/43b lim: 25 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 CrossOver- 00:09:18.927 [2024-07-25 15:57:36.816824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:18.927 [2024-07-25 15:57:36.816851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:18.927 #17 NEW cov: 12220 ft: 13311 corp: 8/49b lim: 25 exec/s: 0 rss: 71Mb L: 6/8 MS: 1 ChangeBinInt- 00:09:18.927 [2024-07-25 15:57:36.877082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:18.927 [2024-07-25 15:57:36.877106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:18.927 #18 NEW cov: 12220 ft: 13317 corp: 9/56b lim: 25 exec/s: 0 rss: 71Mb L: 7/8 MS: 1 InsertByte- 00:09:19.183 [2024-07-25 15:57:36.927227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.183 [2024-07-25 15:57:36.927250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.183 #19 NEW cov: 12220 ft: 13361 corp: 10/62b lim: 25 exec/s: 0 rss: 71Mb L: 6/8 MS: 1 ChangeByte- 00:09:19.183 [2024-07-25 15:57:36.977731] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.183 [2024-07-25 15:57:36.977754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.183 #20 NEW cov: 12220 ft: 13455 corp: 11/68b lim: 25 exec/s: 0 rss: 71Mb L: 6/8 MS: 1 ChangeBit- 00:09:19.183 [2024-07-25 15:57:37.038734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.183 [2024-07-25 15:57:37.038761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.183 [2024-07-25 15:57:37.038858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:19.183 [2024-07-25 15:57:37.038873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:19.183 [2024-07-25 15:57:37.038940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:19.183 [2024-07-25 15:57:37.038953] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:19.183 [2024-07-25 15:57:37.039001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:19.183 [2024-07-25 15:57:37.039013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:19.183 #21 NEW cov: 12220 ft: 14063 corp: 12/88b lim: 25 exec/s: 0 rss: 71Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:09:19.183 [2024-07-25 15:57:37.098292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.183 [2024-07-25 15:57:37.098315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.183 #22 NEW cov: 12220 ft: 14209 corp: 13/95b lim: 25 exec/s: 0 rss: 72Mb L: 7/20 MS: 1 InsertByte- 00:09:19.183 [2024-07-25 15:57:37.159230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.183 [2024-07-25 15:57:37.159253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.183 [2024-07-25 15:57:37.159340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:19.183 [2024-07-25 15:57:37.159355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:19.183 [2024-07-25 15:57:37.159444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:19.183 [2024-07-25 15:57:37.159461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:19.440 #23 NEW cov: 12220 ft: 14504 corp: 14/112b lim: 25 exec/s: 0 rss: 72Mb L: 17/20 MS: 1 InsertRepeatedBytes- 00:09:19.440 [2024-07-25 15:57:37.218851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.440 [2024-07-25 15:57:37.218880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.440 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:19.440 #24 NEW cov: 12243 ft: 14528 corp: 15/118b lim: 25 exec/s: 0 rss: 72Mb L: 6/20 MS: 1 ChangeByte- 00:09:19.440 [2024-07-25 15:57:37.279315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.440 [2024-07-25 15:57:37.279340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.440 #25 NEW cov: 12243 ft: 14555 corp: 16/124b lim: 25 exec/s: 0 rss: 72Mb L: 6/20 MS: 1 CrossOver- 00:09:19.440 [2024-07-25 15:57:37.329629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.440 [2024-07-25 15:57:37.329652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.440 #26 NEW cov: 12243 ft: 14579 corp: 17/130b lim: 25 exec/s: 26 rss: 72Mb L: 6/20 MS: 1 ChangeBinInt- 00:09:19.440 [2024-07-25 15:57:37.390110] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.440 [2024-07-25 15:57:37.390134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.440 #27 NEW cov: 12243 ft: 14621 corp: 18/136b lim: 25 exec/s: 27 rss: 72Mb L: 6/20 MS: 1 CopyPart- 00:09:19.697 [2024-07-25 15:57:37.440442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.697 [2024-07-25 15:57:37.440467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.697 #28 NEW cov: 12243 ft: 14632 corp: 19/142b lim: 25 exec/s: 28 rss: 72Mb L: 6/20 MS: 1 ShuffleBytes- 00:09:19.697 [2024-07-25 15:57:37.500814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.697 [2024-07-25 15:57:37.500839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.697 #34 NEW cov: 12243 ft: 14667 corp: 20/149b lim: 25 exec/s: 34 rss: 72Mb L: 7/20 MS: 1 InsertByte- 00:09:19.697 [2024-07-25 15:57:37.551153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.698 [2024-07-25 15:57:37.551178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.698 #35 NEW cov: 12243 ft: 14678 corp: 21/156b lim: 25 exec/s: 35 rss: 72Mb L: 7/20 MS: 1 CopyPart- 00:09:19.698 [2024-07-25 15:57:37.601547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.698 [2024-07-25 15:57:37.601573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.698 #36 NEW cov: 12243 ft: 14693 corp: 22/163b lim: 25 exec/s: 36 rss: 72Mb L: 7/20 MS: 1 ChangeBinInt- 00:09:19.698 [2024-07-25 15:57:37.672046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.698 [2024-07-25 15:57:37.672074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.955 #37 NEW cov: 12243 ft: 14728 corp: 23/170b lim: 25 exec/s: 37 rss: 72Mb L: 7/20 MS: 1 ChangeByte- 00:09:19.955 [2024-07-25 15:57:37.722481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.955 [2024-07-25 15:57:37.722511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.955 #38 NEW cov: 12243 ft: 14766 corp: 24/177b lim: 25 exec/s: 38 rss: 72Mb L: 7/20 MS: 1 ChangeBit- 00:09:19.955 [2024-07-25 15:57:37.793862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.955 [2024-07-25 15:57:37.793893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.955 [2024-07-25 15:57:37.793970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:19.955 [2024-07-25 15:57:37.793986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 
p:0 m:0 dnr:1 00:09:19.955 [2024-07-25 15:57:37.794068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:19.955 [2024-07-25 15:57:37.794086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:19.955 [2024-07-25 15:57:37.794174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:19.955 [2024-07-25 15:57:37.794191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:19.955 #39 NEW cov: 12243 ft: 14778 corp: 25/200b lim: 25 exec/s: 39 rss: 72Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:09:19.955 [2024-07-25 15:57:37.844142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.955 [2024-07-25 15:57:37.844174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.955 [2024-07-25 15:57:37.844256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:19.955 [2024-07-25 15:57:37.844271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:19.955 [2024-07-25 15:57:37.844348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:19.955 [2024-07-25 15:57:37.844363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:19.955 [2024-07-25 15:57:37.844464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:19.955 [2024-07-25 15:57:37.844480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:19.955 #40 NEW cov: 12243 ft: 14790 corp: 26/220b lim: 25 exec/s: 40 rss: 72Mb L: 20/23 MS: 1 CopyPart- 00:09:19.955 [2024-07-25 15:57:37.913603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:19.955 [2024-07-25 15:57:37.913630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.955 #41 NEW cov: 12243 ft: 14802 corp: 27/226b lim: 25 exec/s: 41 rss: 72Mb L: 6/23 MS: 1 ChangeBit- 00:09:20.213 [2024-07-25 15:57:37.964257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:20.213 [2024-07-25 15:57:37.964283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.213 [2024-07-25 15:57:37.964354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:20.213 [2024-07-25 15:57:37.964372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.213 #42 NEW cov: 12243 ft: 15077 corp: 28/239b lim: 25 exec/s: 42 rss: 72Mb L: 13/23 MS: 1 CrossOver- 00:09:20.213 [2024-07-25 15:57:38.034486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:20.213 [2024-07-25 15:57:38.034510] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.213 #43 NEW cov: 12243 ft: 15085 corp: 29/248b lim: 25 exec/s: 43 rss: 72Mb L: 9/23 MS: 1 InsertRepeatedBytes- 00:09:20.213 [2024-07-25 15:57:38.084637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:20.213 [2024-07-25 15:57:38.084662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.213 #44 NEW cov: 12243 ft: 15095 corp: 30/255b lim: 25 exec/s: 44 rss: 72Mb L: 7/23 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:09:20.213 [2024-07-25 15:57:38.135278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:20.213 [2024-07-25 15:57:38.135305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.213 [2024-07-25 15:57:38.135370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:20.213 [2024-07-25 15:57:38.135384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.213 #45 NEW cov: 12243 ft: 15103 corp: 31/266b lim: 25 exec/s: 45 rss: 73Mb L: 11/23 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:09:20.213 [2024-07-25 15:57:38.195794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:20.213 [2024-07-25 15:57:38.195818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.214 [2024-07-25 15:57:38.195904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:20.214 [2024-07-25 15:57:38.195919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.472 #46 NEW cov: 12243 ft: 15110 corp: 32/277b lim: 25 exec/s: 46 rss: 73Mb L: 11/23 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:09:20.472 [2024-07-25 15:57:38.245851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:20.472 [2024-07-25 15:57:38.245875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.472 #47 NEW cov: 12243 ft: 15134 corp: 33/284b lim: 25 exec/s: 47 rss: 73Mb L: 7/23 MS: 1 InsertByte- 00:09:20.472 [2024-07-25 15:57:38.296087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:20.472 [2024-07-25 15:57:38.296110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.472 #48 NEW cov: 12243 ft: 15140 corp: 34/292b lim: 25 exec/s: 48 rss: 73Mb L: 8/23 MS: 1 ChangeByte- 00:09:20.472 [2024-07-25 15:57:38.346504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:20.472 [2024-07-25 15:57:38.346529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.472 #49 NEW cov: 12243 ft: 15224 corp: 35/298b lim: 25 exec/s: 24 rss: 73Mb L: 6/23 MS: 1 EraseBytes- 
00:09:20.472 #49 DONE cov: 12243 ft: 15224 corp: 35/298b lim: 25 exec/s: 24 rss: 73Mb 00:09:20.472 ###### Recommended dictionary. ###### 00:09:20.472 "\010\000\000\000" # Uses: 4 00:09:20.472 ###### End of recommended dictionary. ###### 00:09:20.472 Done 49 runs in 2 second(s) 00:09:20.730 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:09:20.730 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:20.730 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:20.730 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:09:20.730 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:09:20.731 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:20.731 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:20.731 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:20.731 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:09:20.731 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:20.731 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:20.731 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:09:20.731 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:09:20.731 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:20.731 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:09:20.731 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:20.731 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:20.731 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:20.731 15:57:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:09:20.731 [2024-07-25 15:57:38.538091] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
00:09:20.731 [2024-07-25 15:57:38.538165] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175509 ] 00:09:20.731 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.731 [2024-07-25 15:57:38.712521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.989 [2024-07-25 15:57:38.777784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.989 [2024-07-25 15:57:38.836140] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.989 [2024-07-25 15:57:38.852345] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:09:20.989 INFO: Running with entropic power schedule (0xFF, 100). 00:09:20.989 INFO: Seed: 3169971391 00:09:20.989 INFO: Loaded 1 modules (359117 inline 8-bit counters): 359117 [0x29c894c, 0x2a20419), 00:09:20.989 INFO: Loaded 1 PC tables (359117 PCs): 359117 [0x2a20420,0x2f9b0f0), 00:09:20.989 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:20.989 INFO: A corpus is not provided, starting from an empty corpus 00:09:20.989 #2 INITED exec/s: 0 rss: 63Mb 00:09:20.989 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:20.989 This may also happen if the target rejected all inputs we tried so far 00:09:20.989 [2024-07-25 15:57:38.907836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.989 [2024-07-25 15:57:38.907865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.989 [2024-07-25 15:57:38.907910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.989 [2024-07-25 15:57:38.907923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.989 [2024-07-25 15:57:38.907972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.990 [2024-07-25 15:57:38.907985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.248 NEW_FUNC[1/702]: 0x4af920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:09:21.248 NEW_FUNC[2/702]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:21.248 #9 NEW cov: 12088 ft: 12086 corp: 2/67b lim: 100 exec/s: 0 rss: 71Mb L: 66/66 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:09:21.248 [2024-07-25 15:57:39.068625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.248 [2024-07-25 15:57:39.068666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.248 [2024-07-25 15:57:39.068730] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.248 [2024-07-25 15:57:39.068748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.248 [2024-07-25 15:57:39.068816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.248 [2024-07-25 15:57:39.068833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.248 #10 NEW cov: 12201 ft: 12643 corp: 3/133b lim: 100 exec/s: 0 rss: 71Mb L: 66/66 MS: 1 ShuffleBytes- 00:09:21.248 [2024-07-25 15:57:39.128500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9370445469731422850 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.248 [2024-07-25 15:57:39.128528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.248 [2024-07-25 15:57:39.128574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.248 [2024-07-25 15:57:39.128588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.248 [2024-07-25 15:57:39.128642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.248 [2024-07-25 15:57:39.128656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.249 #11 NEW cov: 12207 ft: 12944 corp: 4/200b lim: 100 exec/s: 0 rss: 71Mb L: 67/67 MS: 1 CrossOver- 00:09:21.249 [2024-07-25 15:57:39.168787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.249 [2024-07-25 15:57:39.168814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.249 [2024-07-25 15:57:39.168871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.249 [2024-07-25 15:57:39.168885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.249 [2024-07-25 15:57:39.168937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.249 [2024-07-25 15:57:39.168951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.249 [2024-07-25 15:57:39.169005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.249 [2024-07-25 15:57:39.169018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:21.249 
#12 NEW cov: 12292 ft: 13567 corp: 5/290b lim: 100 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 CopyPart- 00:09:21.249 [2024-07-25 15:57:39.208706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.249 [2024-07-25 15:57:39.208734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.249 [2024-07-25 15:57:39.208804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.249 [2024-07-25 15:57:39.208828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.249 [2024-07-25 15:57:39.208881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.249 [2024-07-25 15:57:39.208894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.508 #13 NEW cov: 12292 ft: 13633 corp: 6/356b lim: 100 exec/s: 0 rss: 72Mb L: 66/90 MS: 1 ChangeBinInt- 00:09:21.508 [2024-07-25 15:57:39.258871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.258895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.508 [2024-07-25 15:57:39.258960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.258973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.508 [2024-07-25 15:57:39.259029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.259043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.508 #14 NEW cov: 12292 ft: 13707 corp: 7/422b lim: 100 exec/s: 0 rss: 72Mb L: 66/90 MS: 1 ShuffleBytes- 00:09:21.508 [2024-07-25 15:57:39.298969] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.298994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.508 [2024-07-25 15:57:39.299053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.299065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.508 [2024-07-25 15:57:39.299120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 
15:57:39.299134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.508 #15 NEW cov: 12292 ft: 13830 corp: 8/488b lim: 100 exec/s: 0 rss: 72Mb L: 66/90 MS: 1 CMP- DE: "\001\037"- 00:09:21.508 [2024-07-25 15:57:39.339223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.339246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.508 [2024-07-25 15:57:39.339299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.339309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.508 [2024-07-25 15:57:39.339382] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.339395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.508 [2024-07-25 15:57:39.339452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.339465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:21.508 #16 NEW cov: 12292 ft: 13852 corp: 9/578b lim: 100 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 CopyPart- 00:09:21.508 [2024-07-25 15:57:39.389019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222467876225666 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.389044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.508 [2024-07-25 15:57:39.389082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.389094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.508 #17 NEW cov: 12292 ft: 14193 corp: 10/625b lim: 100 exec/s: 0 rss: 72Mb L: 47/90 MS: 1 CrossOver- 00:09:21.508 [2024-07-25 15:57:39.429280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.429305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.508 [2024-07-25 15:57:39.429360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.429374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.508 [2024-07-25 15:57:39.429431] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.429443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.508 #18 NEW cov: 12292 ft: 14262 corp: 11/692b lim: 100 exec/s: 0 rss: 72Mb L: 67/90 MS: 1 InsertByte- 00:09:21.508 [2024-07-25 15:57:39.479587] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.479611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.508 [2024-07-25 15:57:39.479666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404078974092608130 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.479678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.508 [2024-07-25 15:57:39.479732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.479745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.508 [2024-07-25 15:57:39.479802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.508 [2024-07-25 15:57:39.479815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:21.767 #19 NEW cov: 12292 ft: 14292 corp: 12/782b lim: 100 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 CMP- DE: "\000\000\000\000"- 00:09:21.767 [2024-07-25 15:57:39.519580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.767 [2024-07-25 15:57:39.519604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.767 [2024-07-25 15:57:39.519650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.767 [2024-07-25 15:57:39.519662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.767 [2024-07-25 15:57:39.519717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2270520659195101697 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.767 [2024-07-25 15:57:39.519729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.767 #20 NEW cov: 12292 ft: 14350 corp: 13/849b lim: 100 exec/s: 0 rss: 72Mb L: 67/90 MS: 1 CopyPart- 00:09:21.767 [2024-07-25 15:57:39.569546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.767 [2024-07-25 15:57:39.569572] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.767 [2024-07-25 15:57:39.569611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.767 [2024-07-25 15:57:39.569624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.767 #21 NEW cov: 12292 ft: 14420 corp: 14/898b lim: 100 exec/s: 0 rss: 72Mb L: 49/90 MS: 1 InsertRepeatedBytes- 00:09:21.767 [2024-07-25 15:57:39.609843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.767 [2024-07-25 15:57:39.609867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.767 [2024-07-25 15:57:39.609941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446743536838639615 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.767 [2024-07-25 15:57:39.609955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.767 [2024-07-25 15:57:39.610010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404223005820879490 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.767 [2024-07-25 15:57:39.610023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.767 #22 NEW cov: 12292 ft: 14442 corp: 15/967b lim: 100 exec/s: 0 rss: 72Mb L: 69/90 MS: 1 CrossOver- 00:09:21.767 [2024-07-25 15:57:39.659635] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:10055284022328396683 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.767 [2024-07-25 15:57:39.659660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.767 #28 NEW cov: 12292 ft: 15319 corp: 16/987b lim: 100 exec/s: 0 rss: 72Mb L: 20/90 MS: 1 InsertRepeatedBytes- 00:09:21.767 [2024-07-25 15:57:39.700084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9370445469731422850 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.767 [2024-07-25 15:57:39.700109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.767 [2024-07-25 15:57:39.700153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.767 [2024-07-25 15:57:39.700166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.767 [2024-07-25 15:57:39.700220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222469084185218 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.767 [2024-07-25 15:57:39.700232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.767 #29 NEW cov: 12292 ft: 15367 corp: 17/1054b lim: 100 exec/s: 0 
rss: 72Mb L: 67/90 MS: 1 ChangeBit- 00:09:21.767 [2024-07-25 15:57:39.750371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.767 [2024-07-25 15:57:39.750395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.768 [2024-07-25 15:57:39.750448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404079248970515074 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.768 [2024-07-25 15:57:39.750460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.768 [2024-07-25 15:57:39.750514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.768 [2024-07-25 15:57:39.750527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.768 [2024-07-25 15:57:39.750582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.768 [2024-07-25 15:57:39.750594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:22.027 NEW_FUNC[1/1]: 0x1a8aef0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:22.027 #30 NEW cov: 12315 ft: 15386 corp: 18/1144b lim: 100 exec/s: 0 rss: 73Mb L: 90/90 MS: 1 ChangeBit- 00:09:22.027 [2024-07-25 15:57:39.810386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.810411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.027 [2024-07-25 15:57:39.810467] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.810480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.027 [2024-07-25 15:57:39.810536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:80926815362908802 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.810549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.027 #31 NEW cov: 12315 ft: 15426 corp: 19/1212b lim: 100 exec/s: 0 rss: 73Mb L: 68/90 MS: 1 InsertByte- 00:09:22.027 [2024-07-25 15:57:39.860520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.860545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.027 [2024-07-25 15:57:39.860605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.860620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.027 [2024-07-25 15:57:39.860691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.860705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.027 #32 NEW cov: 12315 ft: 15434 corp: 20/1283b lim: 100 exec/s: 0 rss: 73Mb L: 71/90 MS: 1 InsertRepeatedBytes- 00:09:22.027 [2024-07-25 15:57:39.900674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9370445469731422850 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.900698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.027 [2024-07-25 15:57:39.900753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.900769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.027 [2024-07-25 15:57:39.900823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.900836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.027 #33 NEW cov: 12315 ft: 15445 corp: 21/1350b lim: 100 exec/s: 33 rss: 73Mb L: 67/90 MS: 1 CrossOver- 00:09:22.027 [2024-07-25 15:57:39.940791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9370445469731422850 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.940815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.027 [2024-07-25 15:57:39.940868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.940878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.027 [2024-07-25 15:57:39.940933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:80926815362908802 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.940946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.027 #34 NEW cov: 12315 ft: 15446 corp: 22/1417b lim: 100 exec/s: 34 rss: 73Mb L: 67/90 MS: 1 PersAutoDict- DE: "\001\037"- 00:09:22.027 [2024-07-25 15:57:39.981064] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.981090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:09:22.027 [2024-07-25 15:57:39.981143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404078974092608130 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.981154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.027 [2024-07-25 15:57:39.981207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.981220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.027 [2024-07-25 15:57:39.981275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9404222466760409730 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.027 [2024-07-25 15:57:39.981290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:22.027 #35 NEW cov: 12315 ft: 15478 corp: 23/1512b lim: 100 exec/s: 35 rss: 73Mb L: 95/95 MS: 1 InsertRepeatedBytes- 00:09:22.286 [2024-07-25 15:57:40.022089] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:288 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.286 [2024-07-25 15:57:40.022127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.286 [2024-07-25 15:57:40.022194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.286 [2024-07-25 15:57:40.022213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.286 [2024-07-25 15:57:40.022279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.286 [2024-07-25 15:57:40.022296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.286 #36 NEW cov: 12315 ft: 15567 corp: 24/1580b lim: 100 exec/s: 36 rss: 73Mb L: 68/95 MS: 1 PersAutoDict- DE: "\001\037"- 00:09:22.286 [2024-07-25 15:57:40.073137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:288 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.286 [2024-07-25 15:57:40.073177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.286 [2024-07-25 15:57:40.073260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.286 [2024-07-25 15:57:40.073277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.287 [2024-07-25 15:57:40.073349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.287 [2024-07-25 15:57:40.073364] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.287 #37 NEW cov: 12315 ft: 15713 corp: 25/1648b lim: 100 exec/s: 37 rss: 73Mb L: 68/95 MS: 1 ChangeBinInt- 00:09:22.287 [2024-07-25 15:57:40.143316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9370445469731422850 len:9091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.287 [2024-07-25 15:57:40.143349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.287 [2024-07-25 15:57:40.143434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.287 [2024-07-25 15:57:40.143455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.287 [2024-07-25 15:57:40.143517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.287 [2024-07-25 15:57:40.143534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.287 #38 NEW cov: 12315 ft: 15737 corp: 26/1715b lim: 100 exec/s: 38 rss: 73Mb L: 67/95 MS: 1 ChangeByte- 00:09:22.287 [2024-07-25 15:57:40.213539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.287 [2024-07-25 15:57:40.213568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.287 [2024-07-25 15:57:40.213640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.287 [2024-07-25 15:57:40.213660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.287 [2024-07-25 15:57:40.213743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2270520659195101697 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.287 [2024-07-25 15:57:40.213765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.287 #39 NEW cov: 12315 ft: 15793 corp: 27/1783b lim: 100 exec/s: 39 rss: 73Mb L: 68/95 MS: 1 InsertByte- 00:09:22.287 [2024-07-25 15:57:40.263359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.287 [2024-07-25 15:57:40.263388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.287 [2024-07-25 15:57:40.263478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.287 [2024-07-25 15:57:40.263496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.545 #40 NEW cov: 12315 ft: 15819 corp: 28/1833b lim: 100 exec/s: 40 rss: 73Mb 
L: 50/95 MS: 1 InsertByte- 00:09:22.545 [2024-07-25 15:57:40.314442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.545 [2024-07-25 15:57:40.314472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.545 [2024-07-25 15:57:40.314553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.545 [2024-07-25 15:57:40.314572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.545 [2024-07-25 15:57:40.314644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.545 [2024-07-25 15:57:40.314664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.545 [2024-07-25 15:57:40.314770] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.545 [2024-07-25 15:57:40.314788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:22.545 #41 NEW cov: 12315 ft: 15831 corp: 29/1931b lim: 100 exec/s: 41 rss: 73Mb L: 98/98 MS: 1 CopyPart- 00:09:22.545 [2024-07-25 15:57:40.364237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9370445469731422850 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.545 [2024-07-25 15:57:40.364269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.545 [2024-07-25 15:57:40.364354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.545 [2024-07-25 15:57:40.364376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.545 [2024-07-25 15:57:40.364475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.545 [2024-07-25 15:57:40.364498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.545 #42 NEW cov: 12315 ft: 15860 corp: 30/1998b lim: 100 exec/s: 42 rss: 73Mb L: 67/98 MS: 1 CopyPart- 00:09:22.545 [2024-07-25 15:57:40.413694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:10055284022328396683 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.545 [2024-07-25 15:57:40.413724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.545 #43 NEW cov: 12315 ft: 15887 corp: 31/2018b lim: 100 exec/s: 43 rss: 74Mb L: 20/98 MS: 1 ChangeBit- 00:09:22.545 [2024-07-25 15:57:40.485217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:22.546 [2024-07-25 15:57:40.485247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.546 [2024-07-25 15:57:40.485348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9379452670999429762 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.546 [2024-07-25 15:57:40.485370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.546 [2024-07-25 15:57:40.485457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.546 [2024-07-25 15:57:40.485472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.546 [2024-07-25 15:57:40.485586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.546 [2024-07-25 15:57:40.485608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:22.546 #44 NEW cov: 12315 ft: 15913 corp: 32/2117b lim: 100 exec/s: 44 rss: 74Mb L: 99/99 MS: 1 InsertByte- 00:09:22.804 [2024-07-25 15:57:40.555503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.804 [2024-07-25 15:57:40.555537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.804 [2024-07-25 15:57:40.555623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.804 [2024-07-25 15:57:40.555644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.804 [2024-07-25 15:57:40.555713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.804 [2024-07-25 15:57:40.555731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.804 [2024-07-25 15:57:40.555844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9404222468949946242 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.804 [2024-07-25 15:57:40.555863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:22.804 #45 NEW cov: 12315 ft: 15927 corp: 33/2216b lim: 100 exec/s: 45 rss: 74Mb L: 99/99 MS: 1 InsertByte- 00:09:22.804 [2024-07-25 15:57:40.615453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.804 [2024-07-25 15:57:40.615483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.804 [2024-07-25 15:57:40.615566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 
lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.804 [2024-07-25 15:57:40.615594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.804 [2024-07-25 15:57:40.615660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744071612399615 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.804 [2024-07-25 15:57:40.615677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.804 #46 NEW cov: 12315 ft: 15947 corp: 34/2282b lim: 100 exec/s: 46 rss: 74Mb L: 66/99 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\000"- 00:09:22.804 [2024-07-25 15:57:40.686125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.804 [2024-07-25 15:57:40.686154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.804 [2024-07-25 15:57:40.686255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9379452670999429762 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.804 [2024-07-25 15:57:40.686275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.804 [2024-07-25 15:57:40.686365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.804 [2024-07-25 15:57:40.686385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.804 [2024-07-25 15:57:40.686496] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.804 [2024-07-25 15:57:40.686515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:22.804 #47 NEW cov: 12315 ft: 16025 corp: 35/2381b lim: 100 exec/s: 47 rss: 74Mb L: 99/99 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\000"- 00:09:22.804 [2024-07-25 15:57:40.756408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9404222466936701570 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.804 [2024-07-25 15:57:40.756434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.804 [2024-07-25 15:57:40.756540] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9379452670999429762 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.804 [2024-07-25 15:57:40.756558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.804 [2024-07-25 15:57:40.756634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404360444774351490 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.804 [2024-07-25 15:57:40.756653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:09:22.804 [2024-07-25 15:57:40.756754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9404222468944200322 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.804 [2024-07-25 15:57:40.756774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:22.804 #48 NEW cov: 12315 ft: 16027 corp: 36/2480b lim: 100 exec/s: 48 rss: 74Mb L: 99/99 MS: 1 CopyPart- 00:09:23.063 [2024-07-25 15:57:40.815564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3209812588725242763 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.063 [2024-07-25 15:57:40.815591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.063 #49 NEW cov: 12315 ft: 16029 corp: 37/2500b lim: 100 exec/s: 49 rss: 74Mb L: 20/99 MS: 1 ChangeByte- 00:09:23.063 [2024-07-25 15:57:40.866457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9370445469731422850 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.063 [2024-07-25 15:57:40.866482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.063 [2024-07-25 15:57:40.866573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9437999466155246210 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.063 [2024-07-25 15:57:40.866589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.063 [2024-07-25 15:57:40.866677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404222468949967490 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.063 [2024-07-25 15:57:40.866692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.063 #50 NEW cov: 12315 ft: 16051 corp: 38/2567b lim: 100 exec/s: 50 rss: 74Mb L: 67/99 MS: 1 ChangeByte- 00:09:23.063 [2024-07-25 15:57:40.916592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.063 [2024-07-25 15:57:40.916620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.063 [2024-07-25 15:57:40.916705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446743536838639615 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.063 [2024-07-25 15:57:40.916723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.063 [2024-07-25 15:57:40.916799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9404223005820879490 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.063 [2024-07-25 15:57:40.916814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.063 #51 NEW cov: 12315 ft: 16057 corp: 39/2636b lim: 100 exec/s: 25 rss: 74Mb L: 69/99 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:09:23.063 #51 DONE cov: 12315 ft: 16057 corp: 39/2636b lim: 100 
exec/s: 25 rss: 74Mb 00:09:23.063 ###### Recommended dictionary. ###### 00:09:23.063 "\001\037" # Uses: 2 00:09:23.063 "\000\000\000\000" # Uses: 1 00:09:23.063 "\377\377\377\377\377\377\377\000" # Uses: 1 00:09:23.063 ###### End of recommended dictionary. ###### 00:09:23.063 Done 51 runs in 2 second(s) 00:09:23.324 15:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:09:23.324 15:57:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:23.324 15:57:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:23.324 15:57:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:09:23.324 00:09:23.324 real 1m4.072s 00:09:23.324 user 1m45.486s 00:09:23.324 sys 0m6.448s 00:09:23.324 15:57:41 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.324 15:57:41 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:23.324 ************************************ 00:09:23.324 END TEST nvmf_llvm_fuzz 00:09:23.324 ************************************ 00:09:23.324 15:57:41 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:09:23.324 15:57:41 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:09:23.324 15:57:41 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:09:23.324 15:57:41 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:23.324 15:57:41 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.324 15:57:41 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:23.324 ************************************ 00:09:23.324 START TEST vfio_llvm_fuzz 00:09:23.324 ************************************ 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:09:23.324 * Looking for test storage... 
00:09:23.324 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:23.324 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:23.325 #define SPDK_CONFIG_H 00:09:23.325 #define SPDK_CONFIG_APPS 1 00:09:23.325 #define SPDK_CONFIG_ARCH native 00:09:23.325 #undef SPDK_CONFIG_ASAN 00:09:23.325 #undef SPDK_CONFIG_AVAHI 00:09:23.325 #undef SPDK_CONFIG_CET 00:09:23.325 #define SPDK_CONFIG_COVERAGE 1 00:09:23.325 #define SPDK_CONFIG_CROSS_PREFIX 00:09:23.325 #undef SPDK_CONFIG_CRYPTO 00:09:23.325 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:23.325 #undef SPDK_CONFIG_CUSTOMOCF 00:09:23.325 #undef SPDK_CONFIG_DAOS 00:09:23.325 #define SPDK_CONFIG_DAOS_DIR 00:09:23.325 #define SPDK_CONFIG_DEBUG 1 00:09:23.325 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:23.325 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:23.325 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:23.325 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:23.325 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:23.325 #undef SPDK_CONFIG_DPDK_UADK 00:09:23.325 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:23.325 #define SPDK_CONFIG_EXAMPLES 1 00:09:23.325 #undef SPDK_CONFIG_FC 00:09:23.325 #define SPDK_CONFIG_FC_PATH 00:09:23.325 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:23.325 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:23.325 #undef SPDK_CONFIG_FUSE 00:09:23.325 #define SPDK_CONFIG_FUZZER 1 00:09:23.325 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:09:23.325 #undef SPDK_CONFIG_GOLANG 00:09:23.325 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:23.325 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:23.325 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:23.325 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:23.325 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:23.325 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:23.325 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:23.325 #define SPDK_CONFIG_IDXD 1 00:09:23.325 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:23.325 #undef SPDK_CONFIG_IPSEC_MB 00:09:23.325 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:23.325 #define SPDK_CONFIG_ISAL 1 00:09:23.325 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:09:23.325 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:23.325 #define SPDK_CONFIG_LIBDIR 00:09:23.325 #undef SPDK_CONFIG_LTO 00:09:23.325 #define SPDK_CONFIG_MAX_LCORES 128 00:09:23.325 #define SPDK_CONFIG_NVME_CUSE 1 00:09:23.325 #undef SPDK_CONFIG_OCF 00:09:23.325 #define SPDK_CONFIG_OCF_PATH 00:09:23.325 #define SPDK_CONFIG_OPENSSL_PATH 00:09:23.325 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:23.325 #define SPDK_CONFIG_PGO_DIR 00:09:23.325 #undef SPDK_CONFIG_PGO_USE 00:09:23.325 #define SPDK_CONFIG_PREFIX /usr/local 00:09:23.325 #undef SPDK_CONFIG_RAID5F 00:09:23.325 #undef SPDK_CONFIG_RBD 00:09:23.325 #define SPDK_CONFIG_RDMA 1 00:09:23.325 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:23.325 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:23.325 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:23.325 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:23.325 #undef SPDK_CONFIG_SHARED 00:09:23.325 #undef SPDK_CONFIG_SMA 00:09:23.325 #define SPDK_CONFIG_TESTS 1 00:09:23.325 #undef SPDK_CONFIG_TSAN 00:09:23.325 #define SPDK_CONFIG_UBLK 1 00:09:23.325 #define SPDK_CONFIG_UBSAN 1 00:09:23.325 #undef SPDK_CONFIG_UNIT_TESTS 00:09:23.325 #undef SPDK_CONFIG_URING 00:09:23.325 #define SPDK_CONFIG_URING_PATH 00:09:23.325 #undef SPDK_CONFIG_URING_ZNS 00:09:23.325 #undef SPDK_CONFIG_USDT 00:09:23.325 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:23.325 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:23.325 #define SPDK_CONFIG_VFIO_USER 1 00:09:23.325 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:23.325 #define SPDK_CONFIG_VHOST 1 00:09:23.325 #define SPDK_CONFIG_VIRTIO 1 00:09:23.325 #undef SPDK_CONFIG_VTUNE 00:09:23.325 #define SPDK_CONFIG_VTUNE_DIR 00:09:23.325 #define SPDK_CONFIG_WERROR 1 00:09:23.325 #define SPDK_CONFIG_WPDK_DIR 00:09:23.325 #undef SPDK_CONFIG_XNVME 00:09:23.325 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.325 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:23.326 
15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:23.326 15:57:41 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:23.326 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:23.327 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # cat 00:09:23.587 15:57:41 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export valgrind= 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # valgrind= 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # uname -s 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:09:23.587 15:57:41 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@281 -- # MAKE=make 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j88 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@301 -- # TEST_MODE= 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@320 -- # [[ -z 175973 ]] 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@320 -- # kill -0 175973 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@333 -- # local mount target_dir 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.J2gdSt 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.J2gdSt/tests/vfio /tmp/spdk.J2gdSt 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # df -T 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # 
uses["$mount"]=0 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=948682752 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4335747072 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=54608584704 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742047232 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=7133462528 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30867648512 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871023616 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=3375104 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=12342374400 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348411904 00:09:23.587 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=6037504 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=30870716416 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871023616 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=307200 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:23.588 15:57:41 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # avails["$mount"]=6174199808 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174203904 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:09:23.588 * Looking for test storage... 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@370 -- # local target_space new_size 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mount=/ 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # target_space=54608584704 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # new_size=9348055040 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:23.588 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # return 0 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 
00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:09:23.588 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:23.588 15:57:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:09:23.588 [2024-07-25 15:57:41.415072] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:09:23.588 [2024-07-25 15:57:41.415127] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176012 ] 00:09:23.588 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.588 [2024-07-25 15:57:41.488141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.588 [2024-07-25 15:57:41.561601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.847 INFO: Running with entropic power schedule (0xFF, 100). 00:09:23.847 INFO: Seed: 1753014437 00:09:23.847 INFO: Loaded 1 modules (356353 inline 8-bit counters): 356353 [0x298918c, 0x29e018d), 00:09:23.847 INFO: Loaded 1 PC tables (356353 PCs): 356353 [0x29e0190,0x2f501a0), 00:09:23.847 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:23.847 INFO: A corpus is not provided, starting from an empty corpus 00:09:23.847 #2 INITED exec/s: 0 rss: 66Mb 00:09:23.847 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:23.847 This may also happen if the target rejected all inputs we tried so far 00:09:23.847 [2024-07-25 15:57:41.803270] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:09:24.106 NEW_FUNC[1/659]: 0x4838a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:09:24.106 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:24.106 #14 NEW cov: 10979 ft: 10486 corp: 2/7b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 2 CopyPart-InsertRepeatedBytes- 00:09:24.365 #15 NEW cov: 10993 ft: 13766 corp: 3/13b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ShuffleBytes- 00:09:24.365 #26 NEW cov: 10993 ft: 14936 corp: 4/19b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:09:24.623 #27 NEW cov: 10993 ft: 15752 corp: 5/25b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:09:24.623 #28 NEW cov: 10993 ft: 16719 corp: 6/31b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ChangeBinInt- 00:09:24.882 NEW_FUNC[1/1]: 0x1a57420 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:24.882 #29 NEW cov: 11010 ft: 16824 corp: 7/37b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:09:24.882 #30 NEW cov: 11010 ft: 16968 corp: 8/43b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:09:24.882 #31 NEW cov: 11010 ft: 17088 corp: 9/49b lim: 6 exec/s: 31 rss: 74Mb L: 6/6 MS: 1 ShuffleBytes- 00:09:25.141 #32 NEW cov: 11010 ft: 17244 corp: 10/55b lim: 6 exec/s: 32 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:09:25.141 #33 NEW cov: 11010 ft: 17554 corp: 11/61b lim: 6 exec/s: 33 rss: 74Mb L: 6/6 MS: 1 ChangeByte- 00:09:25.400 #34 NEW cov: 11010 ft: 17778 corp: 12/67b lim: 6 exec/s: 34 rss: 74Mb L: 6/6 MS: 1 ShuffleBytes- 00:09:25.400 #35 NEW cov: 11020 ft: 17887 corp: 13/73b lim: 6 exec/s: 35 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:09:25.658 #36 NEW cov: 11020 ft: 18023 corp: 14/79b lim: 6 exec/s: 36 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:09:25.658 #37 NEW cov: 11027 ft: 18100 corp: 15/85b lim: 6 exec/s: 37 rss: 74Mb L: 6/6 MS: 1 CrossOver- 00:09:25.916 #43 NEW cov: 11027 ft: 18578 corp: 16/91b lim: 6 exec/s: 43 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:09:25.917 #49 NEW cov: 11027 ft: 18687 corp: 17/97b lim: 6 exec/s: 24 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:09:25.917 #49 DONE cov: 11027 ft: 18687 corp: 17/97b lim: 6 exec/s: 24 rss: 74Mb 00:09:25.917 Done 49 runs in 2 second(s) 00:09:25.917 [2024-07-25 15:57:43.840950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local 
fuzzer_dir=/tmp/vfio-user-1 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:09:26.176 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:26.176 15:57:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:09:26.176 [2024-07-25 15:57:44.129611] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:09:26.176 [2024-07-25 15:57:44.129695] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176451 ] 00:09:26.176 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.435 [2024-07-25 15:57:44.204178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.435 [2024-07-25 15:57:44.275664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.694 INFO: Running with entropic power schedule (0xFF, 100). 00:09:26.694 INFO: Seed: 167052587 00:09:26.694 INFO: Loaded 1 modules (356353 inline 8-bit counters): 356353 [0x298918c, 0x29e018d), 00:09:26.694 INFO: Loaded 1 PC tables (356353 PCs): 356353 [0x29e0190,0x2f501a0), 00:09:26.694 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:26.694 INFO: A corpus is not provided, starting from an empty corpus 00:09:26.694 #2 INITED exec/s: 0 rss: 66Mb 00:09:26.694 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:26.694 This may also happen if the target rejected all inputs we tried so far 00:09:26.694 [2024-07-25 15:57:44.510238] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:09:26.694 [2024-07-25 15:57:44.537790] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:26.694 [2024-07-25 15:57:44.537816] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:26.694 [2024-07-25 15:57:44.537831] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:26.954 NEW_FUNC[1/661]: 0x483e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:09:26.954 NEW_FUNC[2/661]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:26.954 #10 NEW cov: 10968 ft: 10763 corp: 2/5b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 3 ShuffleBytes-CopyPart-CopyPart- 00:09:26.954 [2024-07-25 15:57:44.787501] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:26.954 [2024-07-25 15:57:44.787533] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:26.954 [2024-07-25 15:57:44.787548] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:26.954 #16 NEW cov: 10989 ft: 13735 corp: 3/9b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 ChangeByte- 00:09:26.954 [2024-07-25 15:57:44.913632] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:26.954 [2024-07-25 15:57:44.913656] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:26.954 [2024-07-25 15:57:44.913671] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:27.213 #17 NEW cov: 10989 ft: 15954 corp: 4/13b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:09:27.213 [2024-07-25 15:57:45.038557] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:27.213 [2024-07-25 15:57:45.038582] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:27.213 [2024-07-25 15:57:45.038597] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:27.213 #28 NEW cov: 10989 ft: 16471 corp: 5/17b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:09:27.213 [2024-07-25 15:57:45.153470] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:27.213 [2024-07-25 15:57:45.153495] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:27.213 [2024-07-25 15:57:45.153510] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:27.471 #29 NEW cov: 10989 ft: 16730 corp: 6/21b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:09:27.471 [2024-07-25 15:57:45.267417] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:27.471 [2024-07-25 15:57:45.267441] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:27.471 [2024-07-25 15:57:45.267455] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:27.471 NEW_FUNC[1/1]: 0x1a57420 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:27.471 #30 NEW cov: 11006 ft: 16799 corp: 7/25b 
lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:09:27.471 [2024-07-25 15:57:45.382469] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:27.471 [2024-07-25 15:57:45.382493] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:27.471 [2024-07-25 15:57:45.382508] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:27.471 #31 NEW cov: 11006 ft: 16936 corp: 8/29b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ChangeBit- 00:09:27.730 [2024-07-25 15:57:45.506387] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:27.730 [2024-07-25 15:57:45.506411] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:27.730 [2024-07-25 15:57:45.506424] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:27.730 #32 NEW cov: 11006 ft: 17423 corp: 9/33b lim: 4 exec/s: 32 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:09:27.730 [2024-07-25 15:57:45.631391] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:27.730 [2024-07-25 15:57:45.631415] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:27.730 [2024-07-25 15:57:45.631429] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:27.730 #38 NEW cov: 11006 ft: 17576 corp: 10/37b lim: 4 exec/s: 38 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:09:27.990 [2024-07-25 15:57:45.756390] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:27.990 [2024-07-25 15:57:45.756415] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:27.990 [2024-07-25 15:57:45.756430] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:27.990 #39 NEW cov: 11006 ft: 17619 corp: 11/41b lim: 4 exec/s: 39 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:09:27.990 [2024-07-25 15:57:45.871429] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:27.990 [2024-07-25 15:57:45.871455] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:27.990 [2024-07-25 15:57:45.871469] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:27.990 #40 NEW cov: 11006 ft: 17946 corp: 12/45b lim: 4 exec/s: 40 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:09:28.248 [2024-07-25 15:57:45.987490] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:28.249 [2024-07-25 15:57:45.987519] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:28.249 [2024-07-25 15:57:45.987535] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:28.249 #41 NEW cov: 11006 ft: 17958 corp: 13/49b lim: 4 exec/s: 41 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:09:28.249 [2024-07-25 15:57:46.102389] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:28.249 [2024-07-25 15:57:46.102415] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:28.249 [2024-07-25 15:57:46.102430] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:28.249 #42 NEW cov: 11006 ft: 17980 corp: 14/53b lim: 4 exec/s: 42 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:09:28.249 [2024-07-25 15:57:46.217176] vfio_user.c:3106:vfio_user_log: *ERROR*: 
/tmp/vfio-user-1/domain/1: bad command 1 00:09:28.249 [2024-07-25 15:57:46.217200] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:28.249 [2024-07-25 15:57:46.217215] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:28.507 #43 NEW cov: 11013 ft: 18053 corp: 15/57b lim: 4 exec/s: 43 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:09:28.507 [2024-07-25 15:57:46.333206] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:28.507 [2024-07-25 15:57:46.333230] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:28.507 [2024-07-25 15:57:46.333245] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:28.507 #44 NEW cov: 11013 ft: 18249 corp: 16/61b lim: 4 exec/s: 44 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:09:28.507 [2024-07-25 15:57:46.447222] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:28.507 [2024-07-25 15:57:46.447245] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:28.507 [2024-07-25 15:57:46.447259] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:28.766 #45 NEW cov: 11013 ft: 18481 corp: 17/65b lim: 4 exec/s: 22 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:09:28.766 #45 DONE cov: 11013 ft: 18481 corp: 17/65b lim: 4 exec/s: 22 rss: 74Mb 00:09:28.766 Done 45 runs in 2 second(s) 00:09:28.766 [2024-07-25 15:57:46.542946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 
00:09:29.025 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:29.025 15:57:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:09:29.025 [2024-07-25 15:57:46.827560] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:09:29.025 [2024-07-25 15:57:46.827638] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176885 ] 00:09:29.025 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.025 [2024-07-25 15:57:46.901878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.025 [2024-07-25 15:57:46.972826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.284 INFO: Running with entropic power schedule (0xFF, 100). 00:09:29.284 INFO: Seed: 2873033092 00:09:29.284 INFO: Loaded 1 modules (356353 inline 8-bit counters): 356353 [0x298918c, 0x29e018d), 00:09:29.284 INFO: Loaded 1 PC tables (356353 PCs): 356353 [0x29e0190,0x2f501a0), 00:09:29.284 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:29.284 INFO: A corpus is not provided, starting from an empty corpus 00:09:29.284 #2 INITED exec/s: 0 rss: 66Mb 00:09:29.284 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:29.284 This may also happen if the target rejected all inputs we tried so far 00:09:29.284 [2024-07-25 15:57:47.212104] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:09:29.543 [2024-07-25 15:57:47.292414] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:29.543 NEW_FUNC[1/660]: 0x484820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:09:29.543 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:29.543 #9 NEW cov: 10959 ft: 10890 corp: 2/9b lim: 8 exec/s: 0 rss: 71Mb L: 8/8 MS: 2 InsertRepeatedBytes-InsertByte- 00:09:29.802 [2024-07-25 15:57:47.610949] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:29.802 #14 NEW cov: 10976 ft: 13703 corp: 3/17b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 5 EraseBytes-ChangeBit-InsertByte-CrossOver-CopyPart- 00:09:29.802 [2024-07-25 15:57:47.791499] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:30.060 #20 NEW cov: 10976 ft: 14889 corp: 4/25b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeByte- 00:09:30.060 [2024-07-25 15:57:47.972449] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:30.319 NEW_FUNC[1/1]: 0x1a57420 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:30.319 #26 NEW cov: 10993 ft: 15027 corp: 5/33b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeBinInt- 00:09:30.319 [2024-07-25 15:57:48.158774] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:30.319 #37 NEW cov: 10993 ft: 15433 corp: 6/41b lim: 8 exec/s: 37 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:09:30.577 [2024-07-25 15:57:48.342692] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:30.577 #38 NEW cov: 10993 ft: 15900 corp: 7/49b lim: 8 exec/s: 38 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:09:30.578 [2024-07-25 15:57:48.530767] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:30.836 #39 NEW cov: 10993 ft: 15960 corp: 8/57b lim: 8 exec/s: 39 rss: 74Mb L: 8/8 MS: 1 ChangeByte- 00:09:30.836 [2024-07-25 15:57:48.717807] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:09:30.836 [2024-07-25 15:57:48.717840] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:09:31.094 NEW_FUNC[1/1]: 0x13ecb10 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3094 00:09:31.094 #40 NEW cov: 11003 ft: 17215 corp: 9/65b lim: 8 exec/s: 40 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:09:31.094 [2024-07-25 15:57:48.917252] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:31.094 #41 NEW cov: 11010 ft: 17384 corp: 10/73b lim: 8 exec/s: 41 rss: 74Mb L: 8/8 MS: 1 ChangeByte- 00:09:31.353 [2024-07-25 15:57:49.107449] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:31.353 #42 NEW cov: 11010 ft: 17471 corp: 11/81b lim: 8 exec/s: 21 rss: 74Mb L: 8/8 MS: 1 ChangeBinInt- 00:09:31.353 #42 DONE cov: 11010 ft: 17471 corp: 11/81b lim: 8 exec/s: 21 rss: 74Mb 00:09:31.353 Done 42 runs in 2 second(s) 00:09:31.353 [2024-07-25 15:57:49.235955] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:09:31.612 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:31.612 15:57:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:09:31.612 [2024-07-25 15:57:49.517727] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 
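The ../common.sh traces that repeat between runs, (( i++ )), (( i < fuzz_num )) and start_llvm_fuzz $i 1 0x1, correspond to a small driver loop around the per-type run. A minimal sketch of that loop follows, assuming a start_llvm_fuzz helper with the (type, time, core-mask) argument order seen in the trace; the helper is stubbed here so the sketch stands alone, and the real script's trap calls a cleanup function rather than a plain rm -rf.

# Driver loop sketch for the short vfio fuzz pass (7 targets, 1 second each).
SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
fuzzfile=$SPDK/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c

# One fuzz target per ".fn =" entry in the fuzzer source (7 in this log).
fuzz_num=$(grep -c '\.fn =' "$fuzzfile")
(( fuzz_num != 0 )) || exit 1

# Remove every per-run directory and the shared suppression file on exit,
# mirroring the trap installed at run.sh@71 (cleared again at run.sh@84).
trap 'rm -rf /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT

# Placeholder for the real helper defined in vfio/run.sh.
start_llvm_fuzz() { echo "stub: would launch fuzzer type $1 for $2 s on core mask $3"; }

for (( i = 0; i < fuzz_num; i++ )); do
    start_llvm_fuzz "$i" 1 0x1    # fuzzer type, run time in seconds, core mask
done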
00:09:31.613 [2024-07-25 15:57:49.517816] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177324 ] 00:09:31.613 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.613 [2024-07-25 15:57:49.589949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.872 [2024-07-25 15:57:49.662859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.872 INFO: Running with entropic power schedule (0xFF, 100). 00:09:31.872 INFO: Seed: 1273084030 00:09:32.130 INFO: Loaded 1 modules (356353 inline 8-bit counters): 356353 [0x298918c, 0x29e018d), 00:09:32.130 INFO: Loaded 1 PC tables (356353 PCs): 356353 [0x29e0190,0x2f501a0), 00:09:32.130 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:32.130 INFO: A corpus is not provided, starting from an empty corpus 00:09:32.130 #2 INITED exec/s: 0 rss: 66Mb 00:09:32.130 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:32.130 This may also happen if the target rejected all inputs we tried so far 00:09:32.130 [2024-07-25 15:57:49.910778] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:09:32.389 NEW_FUNC[1/659]: 0x484f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:09:32.390 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:32.390 #52 NEW cov: 10969 ft: 10942 corp: 2/33b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 5 ChangeBinInt-InsertRepeatedBytes-CopyPart-ChangeBinInt-InsertByte- 00:09:32.648 NEW_FUNC[1/1]: 0x1710a80 in nvme_pcie_qpair /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_pcie_internal.h:207 00:09:32.648 #58 NEW cov: 10984 ft: 14194 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 CopyPart- 00:09:32.906 NEW_FUNC[1/1]: 0x1a57420 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:32.906 #64 NEW cov: 11001 ft: 14834 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:09:33.165 #65 NEW cov: 11001 ft: 15880 corp: 5/129b lim: 32 exec/s: 65 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:09:33.423 #66 NEW cov: 11001 ft: 16358 corp: 6/161b lim: 32 exec/s: 66 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:09:33.424 #72 NEW cov: 11001 ft: 16567 corp: 7/193b lim: 32 exec/s: 72 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:09:33.682 #83 NEW cov: 11001 ft: 16589 corp: 8/225b lim: 32 exec/s: 83 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:09:33.940 #84 NEW cov: 11008 ft: 17031 corp: 9/257b lim: 32 exec/s: 84 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:09:33.941 #85 NEW cov: 11008 ft: 17210 corp: 10/289b lim: 32 exec/s: 42 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:33.941 #85 DONE cov: 11008 ft: 17210 corp: 10/289b lim: 32 exec/s: 42 rss: 73Mb 00:09:33.941 Done 85 runs in 2 second(s) 00:09:33.941 [2024-07-25 15:57:51.916946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:34.200 15:57:52 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:09:34.200 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:34.200 15:57:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:09:34.460 [2024-07-25 15:57:52.198483] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:09:34.460 [2024-07-25 15:57:52.198552] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177754 ] 00:09:34.460 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.460 [2024-07-25 15:57:52.270689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.460 [2024-07-25 15:57:52.342892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.719 INFO: Running with entropic power schedule (0xFF, 100). 
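Every iteration launches the same llvm_vfio_fuzz binary; only the per-type paths and the -Z value change. Below, the fuzzer-4 command from this trace is rebuilt as a bash argument array so each flag can carry a comment; the flag meanings are inferred from the shell variable names earlier in the trace (core, mem_size, timen, corpus_dir, vfiouser_dir, vfiouser_io_dir, vfiouser_cfg) and are assumptions, not taken from the tool's documentation.

# Fuzzer type 4 invocation from this trace, annotated (flag meanings inferred).
SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
args=(
    -m 0x1                                    # core mask ($core)
    -s 0                                      # memory size in MB ($mem_size)
    -P "$SPDK/../output/llvm/"                # output/artifact directory (assumed)
    -F /tmp/vfio-user-4/domain/1              # vfio-user target dir ($vfiouser_dir)
    -c /tmp/vfio-user-4/fuzz_vfio_json.conf   # per-run JSON config ($vfiouser_cfg)
    -t 1                                      # run time in seconds ($timen)
    -D "$SPDK/../corpus/llvm_vfio_4"          # corpus directory ($corpus_dir)
    -Y /tmp/vfio-user-4/domain/2              # vfio-user I/O dir ($vfiouser_io_dir)
    -r /tmp/vfio-user-4/spdk4.sock            # RPC socket for this app instance
    -Z 4                                      # fuzzer type index (0..6)
)
"$SPDK/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" "${args[@]}"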
00:09:34.719 INFO: Seed: 3954081001 00:09:34.719 INFO: Loaded 1 modules (356353 inline 8-bit counters): 356353 [0x298918c, 0x29e018d), 00:09:34.719 INFO: Loaded 1 PC tables (356353 PCs): 356353 [0x29e0190,0x2f501a0), 00:09:34.719 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:09:34.719 INFO: A corpus is not provided, starting from an empty corpus 00:09:34.719 #2 INITED exec/s: 0 rss: 66Mb 00:09:34.719 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:34.719 This may also happen if the target rejected all inputs we tried so far 00:09:34.719 [2024-07-25 15:57:52.591409] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:09:34.978 NEW_FUNC[1/654]: 0x485780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:09:34.978 NEW_FUNC[2/654]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:34.978 #451 NEW cov: 10934 ft: 10906 corp: 2/33b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 4 CopyPart-InsertRepeatedBytes-ChangeByte-InsertRepeatedBytes- 00:09:35.236 #452 NEW cov: 10948 ft: 13685 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:35.504 #453 NEW cov: 10948 ft: 15138 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:09:35.504 NEW_FUNC[1/1]: 0x1a57420 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:35.504 #454 NEW cov: 10965 ft: 15558 corp: 5/129b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:09:35.765 #455 NEW cov: 10965 ft: 15702 corp: 6/161b lim: 32 exec/s: 455 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:09:36.024 #456 NEW cov: 10965 ft: 16053 corp: 7/193b lim: 32 exec/s: 456 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:09:36.283 #457 NEW cov: 10965 ft: 16344 corp: 8/225b lim: 32 exec/s: 457 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:09:36.283 #458 NEW cov: 10965 ft: 16849 corp: 9/257b lim: 32 exec/s: 458 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:09:36.541 #459 NEW cov: 10972 ft: 16873 corp: 10/289b lim: 32 exec/s: 459 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:09:36.800 #465 NEW cov: 10972 ft: 17134 corp: 11/321b lim: 32 exec/s: 232 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:09:36.800 #465 DONE cov: 10972 ft: 17134 corp: 11/321b lim: 32 exec/s: 232 rss: 74Mb 00:09:36.800 Done 465 runs in 2 second(s) 00:09:36.800 [2024-07-25 15:57:54.674964] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 
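Right after the per-run variables are set, the trace writes two leak-suppression rules and points LeakSanitizer at them through LSAN_OPTIONS, so those two known allocation sites are not reported as leaks during the short run. A standalone sketch of that step; the rule names, file path, and option string are copied from the trace, while the heredoc form (instead of the script's echo redirections) and the export are simplifications.

# Leak-suppression setup used by every vfio fuzz run in this log (sketch).
suppress_file=/var/tmp/suppress_vfio_fuzz

cat > "$suppress_file" <<'EOF'
leak:spdk_nvmf_qpair_disconnect
leak:nvmf_ctrlr_create
EOF

# report_objects=1     : include the leaked objects in any report
# suppressions=<file>  : apply the rules written above
# print_suppressions=0 : do not echo matched suppressions at exit
export LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0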
00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:09:37.060 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:37.060 15:57:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:09:37.060 [2024-07-25 15:57:54.956506] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:09:37.060 [2024-07-25 15:57:54.956582] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178194 ] 00:09:37.060 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.060 [2024-07-25 15:57:55.027867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.319 [2024-07-25 15:57:55.102098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.319 INFO: Running with entropic power schedule (0xFF, 100). 00:09:37.319 INFO: Seed: 2403107849 00:09:37.319 INFO: Loaded 1 modules (356353 inline 8-bit counters): 356353 [0x298918c, 0x29e018d), 00:09:37.319 INFO: Loaded 1 PC tables (356353 PCs): 356353 [0x29e0190,0x2f501a0), 00:09:37.319 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:37.319 INFO: A corpus is not provided, starting from an empty corpus 00:09:37.319 #2 INITED exec/s: 0 rss: 66Mb 00:09:37.319 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:37.319 This may also happen if the target rejected all inputs we tried so far 00:09:37.577 [2024-07-25 15:57:55.335522] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:09:37.577 [2024-07-25 15:57:55.414684] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:37.577 [2024-07-25 15:57:55.414718] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:37.835 NEW_FUNC[1/661]: 0x486180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:09:37.835 NEW_FUNC[2/661]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:37.835 #97 NEW cov: 10978 ft: 10949 corp: 2/14b lim: 13 exec/s: 0 rss: 71Mb L: 13/13 MS: 5 ChangeBit-InsertRepeatedBytes-InsertByte-ChangeBinInt-InsertByte- 00:09:37.835 [2024-07-25 15:57:55.728033] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:37.835 [2024-07-25 15:57:55.728073] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:38.094 #98 NEW cov: 10992 ft: 14024 corp: 3/27b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 CopyPart- 00:09:38.094 [2024-07-25 15:57:55.915708] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:38.094 [2024-07-25 15:57:55.915738] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:38.094 #109 NEW cov: 10995 ft: 15812 corp: 4/40b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 ChangeByte- 00:09:38.352 [2024-07-25 15:57:56.099083] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:38.352 [2024-07-25 15:57:56.099112] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:38.352 NEW_FUNC[1/1]: 0x1a57420 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:38.352 #110 NEW cov: 11012 ft: 16471 corp: 5/53b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 CopyPart- 00:09:38.352 [2024-07-25 15:57:56.303192] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:38.352 [2024-07-25 15:57:56.303221] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:38.611 #121 NEW cov: 11012 ft: 16550 corp: 6/66b lim: 13 exec/s: 121 rss: 74Mb L: 13/13 MS: 1 ChangeByte- 00:09:38.611 [2024-07-25 15:57:56.495153] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:38.611 [2024-07-25 15:57:56.495184] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:38.870 #122 NEW cov: 11012 ft: 16818 corp: 7/79b lim: 13 exec/s: 122 rss: 74Mb L: 13/13 MS: 1 CMP- DE: "\000\000"- 00:09:38.870 [2024-07-25 15:57:56.685950] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:38.870 [2024-07-25 15:57:56.685977] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:38.870 #128 NEW cov: 11012 ft: 17145 corp: 8/92b lim: 13 exec/s: 128 rss: 74Mb L: 13/13 MS: 1 ChangeBit- 00:09:39.128 [2024-07-25 15:57:56.868808] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:39.128 [2024-07-25 15:57:56.868836] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 
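A few lines further on, once this run's "Done ... runs in 2 second(s)" summary prints, run.sh@58 removes the per-run scratch directory and the shared suppression file before the driver loop advances to the next fuzzer type. A minimal equivalent helper, assuming the directory layout shown in this trace:

# Per-run teardown, equivalent to the rm -rf traced at run.sh@58.
cleanup_run() {
    local n=$1
    rm -rf "/tmp/vfio-user-$n" /var/tmp/suppress_vfio_fuzz
}

cleanup_run 5    # e.g. after fuzzer type 5 finishes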
00:09:39.128 #134 NEW cov: 11012 ft: 17197 corp: 9/105b lim: 13 exec/s: 134 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:09:39.128 [2024-07-25 15:57:57.060798] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:39.128 [2024-07-25 15:57:57.060826] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:39.387 #135 NEW cov: 11019 ft: 17334 corp: 10/118b lim: 13 exec/s: 135 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:09:39.387 [2024-07-25 15:57:57.252191] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:39.387 [2024-07-25 15:57:57.252219] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:39.387 #140 NEW cov: 11019 ft: 17349 corp: 11/131b lim: 13 exec/s: 70 rss: 74Mb L: 13/13 MS: 5 CrossOver-CopyPart-ChangeBit-ChangeByte-InsertRepeatedBytes- 00:09:39.387 #140 DONE cov: 11019 ft: 17349 corp: 11/131b lim: 13 exec/s: 70 rss: 74Mb 00:09:39.387 ###### Recommended dictionary. ###### 00:09:39.387 "\000\000" # Uses: 2 00:09:39.388 ###### End of recommended dictionary. ###### 00:09:39.388 Done 140 runs in 2 second(s) 00:09:39.647 [2024-07-25 15:57:57.382945] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:39.647 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:09:39.647 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:39.907 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:39.907 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # 
echo leak:nvmf_ctrlr_create 00:09:39.907 15:57:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:09:39.907 [2024-07-25 15:57:57.665469] Starting SPDK v24.09-pre git sha1 5efb3b7d9 / DPDK 24.03.0 initialization... 00:09:39.907 [2024-07-25 15:57:57.665541] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178630 ] 00:09:39.907 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.907 [2024-07-25 15:57:57.741874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.907 [2024-07-25 15:57:57.817302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.166 INFO: Running with entropic power schedule (0xFF, 100). 00:09:40.166 INFO: Seed: 836146495 00:09:40.166 INFO: Loaded 1 modules (356353 inline 8-bit counters): 356353 [0x298918c, 0x29e018d), 00:09:40.166 INFO: Loaded 1 PC tables (356353 PCs): 356353 [0x29e0190,0x2f501a0), 00:09:40.166 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:40.166 INFO: A corpus is not provided, starting from an empty corpus 00:09:40.166 #2 INITED exec/s: 0 rss: 66Mb 00:09:40.166 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:40.166 This may also happen if the target rejected all inputs we tried so far 00:09:40.166 [2024-07-25 15:57:58.067437] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:09:40.166 [2024-07-25 15:57:58.143589] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:40.166 [2024-07-25 15:57:58.143620] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:40.425 NEW_FUNC[1/658]: 0x486e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:09:40.425 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:40.425 #4 NEW cov: 10940 ft: 10937 corp: 2/10b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 2 InsertByte-InsertRepeatedBytes- 00:09:40.683 [2024-07-25 15:57:58.461664] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:40.684 [2024-07-25 15:57:58.461704] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:40.684 NEW_FUNC[1/3]: 0x1177cd0 in spdk_nvmf_request_complete /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:4639 00:09:40.684 NEW_FUNC[2/3]: 0x1178090 in spdk_thread_exec_msg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/thread.h:546 00:09:40.684 #15 NEW cov: 10983 ft: 14229 corp: 3/19b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeBit- 00:09:40.684 [2024-07-25 15:57:58.664820] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:40.684 [2024-07-25 15:57:58.664848] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:40.942 #26 NEW cov: 10983 ft: 15393 corp: 4/28b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeByte- 00:09:40.942 [2024-07-25 15:57:58.857551] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:40.942 [2024-07-25 15:57:58.857581] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:41.201 NEW_FUNC[1/1]: 0x1a57420 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:41.201 #27 NEW cov: 11000 ft: 15993 corp: 5/37b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:09:41.201 [2024-07-25 15:57:59.057683] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:41.201 [2024-07-25 15:57:59.057710] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:41.201 #28 NEW cov: 11000 ft: 16384 corp: 6/46b lim: 9 exec/s: 28 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:09:41.460 [2024-07-25 15:57:59.250415] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:41.460 [2024-07-25 15:57:59.250443] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:41.460 #31 NEW cov: 11000 ft: 16961 corp: 7/55b lim: 9 exec/s: 31 rss: 74Mb L: 9/9 MS: 3 CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:09:41.460 [2024-07-25 15:57:59.440876] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:41.460 [2024-07-25 15:57:59.440904] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:41.719 #32 NEW cov: 11000 ft: 17850 corp: 8/64b lim: 9 exec/s: 32 rss: 74Mb L: 9/9 MS: 1 ChangeByte- 00:09:41.719 
[2024-07-25 15:57:59.638683] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:41.719 [2024-07-25 15:57:59.638711] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:41.978 #42 NEW cov: 11000 ft: 17954 corp: 9/73b lim: 9 exec/s: 42 rss: 74Mb L: 9/9 MS: 5 EraseBytes-ChangeBinInt-ChangeByte-ChangeBit-CopyPart- 00:09:41.978 [2024-07-25 15:57:59.819382] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:41.978 [2024-07-25 15:57:59.819410] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:41.978 #43 NEW cov: 11007 ft: 18027 corp: 10/82b lim: 9 exec/s: 43 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:09:42.237 [2024-07-25 15:58:00.014219] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:42.237 [2024-07-25 15:58:00.014252] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:42.237 #44 NEW cov: 11007 ft: 18480 corp: 11/91b lim: 9 exec/s: 22 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:09:42.237 #44 DONE cov: 11007 ft: 18480 corp: 11/91b lim: 9 exec/s: 22 rss: 74Mb 00:09:42.237 Done 44 runs in 2 second(s) 00:09:42.237 [2024-07-25 15:58:00.140954] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:09:42.496 15:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:09:42.496 15:58:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:42.496 15:58:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:42.496 15:58:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:09:42.496 00:09:42.496 real 0m19.249s 00:09:42.496 user 0m27.730s 00:09:42.496 sys 0m1.674s 00:09:42.496 15:58:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.496 15:58:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:42.496 ************************************ 00:09:42.496 END TEST vfio_llvm_fuzz 00:09:42.496 ************************************ 00:09:42.496 15:58:00 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:09:42.496 00:09:42.496 real 1m23.550s 00:09:42.496 user 2m13.308s 00:09:42.496 sys 0m8.277s 00:09:42.496 15:58:00 llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.496 15:58:00 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:42.496 ************************************ 00:09:42.496 END TEST llvm_fuzz 00:09:42.496 ************************************ 00:09:42.496 15:58:00 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:09:42.496 15:58:00 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:09:42.496 15:58:00 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:09:42.496 15:58:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:42.496 15:58:00 -- common/autotest_common.sh@10 -- # set +x 00:09:42.496 15:58:00 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:09:42.496 15:58:00 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:09:42.496 15:58:00 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:09:42.496 15:58:00 -- common/autotest_common.sh@10 -- # set +x 00:09:47.768 INFO: APP EXITING 00:09:47.768 INFO: killing all VMs 00:09:47.768 INFO: killing vhost app 00:09:47.768 INFO: EXIT DONE 00:09:50.303 Waiting for block devices as requested 00:09:50.303 0000:dd:00.0 (8086 0a54): vfio-pci -> nvme 
00:09:50.303 0000:df:00.0 (8086 0a54): vfio-pci -> nvme 00:09:50.303 0000:de:00.0 (8086 0953): vfio-pci -> nvme 00:09:50.303 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:50.562 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:50.562 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:50.562 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:50.821 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:50.821 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:50.821 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:51.080 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:51.080 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:51.080 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:51.080 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:51.339 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:51.339 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:51.339 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:51.596 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:51.596 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:51.596 0000:dc:00.0 (8086 0953): vfio-pci -> nvme 00:09:56.862 Cleaning 00:09:56.862 Removing: /dev/shm/spdk_tgt_trace.pid152118 00:09:56.862 Removing: /var/run/dpdk/spdk_pid148641 00:09:56.862 Removing: /var/run/dpdk/spdk_pid150276 00:09:56.862 Removing: /var/run/dpdk/spdk_pid152118 00:09:56.862 Removing: /var/run/dpdk/spdk_pid152718 00:09:56.862 Removing: /var/run/dpdk/spdk_pid153603 00:09:56.862 Removing: /var/run/dpdk/spdk_pid153825 00:09:56.862 Removing: /var/run/dpdk/spdk_pid154734 00:09:56.862 Removing: /var/run/dpdk/spdk_pid154945 00:09:56.862 Removing: /var/run/dpdk/spdk_pid155299 00:09:56.862 Removing: /var/run/dpdk/spdk_pid155566 00:09:56.862 Removing: /var/run/dpdk/spdk_pid155834 00:09:56.862 Removing: /var/run/dpdk/spdk_pid156126 00:09:56.862 Removing: /var/run/dpdk/spdk_pid156398 00:09:56.862 Removing: /var/run/dpdk/spdk_pid156639 00:09:56.862 Removing: /var/run/dpdk/spdk_pid156869 00:09:56.862 Removing: /var/run/dpdk/spdk_pid157127 00:09:56.862 Removing: /var/run/dpdk/spdk_pid157915 00:09:56.862 Removing: /var/run/dpdk/spdk_pid160818 00:09:56.862 Removing: /var/run/dpdk/spdk_pid161061 00:09:56.862 Removing: /var/run/dpdk/spdk_pid161309 00:09:56.862 Removing: /var/run/dpdk/spdk_pid161523 00:09:56.862 Removing: /var/run/dpdk/spdk_pid161783 00:09:56.862 Removing: /var/run/dpdk/spdk_pid161993 00:09:56.862 Removing: /var/run/dpdk/spdk_pid162449 00:09:56.862 Removing: /var/run/dpdk/spdk_pid162474 00:09:56.862 Removing: /var/run/dpdk/spdk_pid162725 00:09:56.862 Removing: /var/run/dpdk/spdk_pid162925 00:09:56.862 Removing: /var/run/dpdk/spdk_pid163165 00:09:56.862 Removing: /var/run/dpdk/spdk_pid163251 00:09:56.862 Removing: /var/run/dpdk/spdk_pid163715 00:09:56.862 Removing: /var/run/dpdk/spdk_pid163950 00:09:56.862 Removing: /var/run/dpdk/spdk_pid164186 00:09:56.862 Removing: /var/run/dpdk/spdk_pid164456 00:09:56.862 Removing: /var/run/dpdk/spdk_pid164894 00:09:56.862 Removing: /var/run/dpdk/spdk_pid165310 00:09:56.862 Removing: /var/run/dpdk/spdk_pid165747 00:09:56.862 Removing: /var/run/dpdk/spdk_pid166176 00:09:56.862 Removing: /var/run/dpdk/spdk_pid166614 00:09:56.862 Removing: /var/run/dpdk/spdk_pid167041 00:09:56.862 Removing: /var/run/dpdk/spdk_pid167477 00:09:56.862 Removing: /var/run/dpdk/spdk_pid167908 00:09:56.862 Removing: /var/run/dpdk/spdk_pid168292 00:09:56.862 Removing: /var/run/dpdk/spdk_pid168660 00:09:56.862 Removing: /var/run/dpdk/spdk_pid169145 00:09:56.862 Removing: /var/run/dpdk/spdk_pid169570 00:09:56.862 Removing: 
00:09:56.862 Cleaning
00:09:56.862 Removing: /dev/shm/spdk_tgt_trace.pid152118
00:09:56.862 Removing: /var/run/dpdk/spdk_pid148641
00:09:56.862 Removing: /var/run/dpdk/spdk_pid150276
00:09:56.862 Removing: /var/run/dpdk/spdk_pid152118
00:09:56.862 Removing: /var/run/dpdk/spdk_pid152718
00:09:56.862 Removing: /var/run/dpdk/spdk_pid153603
00:09:56.862 Removing: /var/run/dpdk/spdk_pid153825
00:09:56.862 Removing: /var/run/dpdk/spdk_pid154734
00:09:56.862 Removing: /var/run/dpdk/spdk_pid154945
00:09:56.862 Removing: /var/run/dpdk/spdk_pid155299
00:09:56.862 Removing: /var/run/dpdk/spdk_pid155566
00:09:56.862 Removing: /var/run/dpdk/spdk_pid155834
00:09:56.862 Removing: /var/run/dpdk/spdk_pid156126
00:09:56.862 Removing: /var/run/dpdk/spdk_pid156398
00:09:56.862 Removing: /var/run/dpdk/spdk_pid156639
00:09:56.862 Removing: /var/run/dpdk/spdk_pid156869
00:09:56.862 Removing: /var/run/dpdk/spdk_pid157127
00:09:56.862 Removing: /var/run/dpdk/spdk_pid157915
00:09:56.862 Removing: /var/run/dpdk/spdk_pid160818
00:09:56.862 Removing: /var/run/dpdk/spdk_pid161061
00:09:56.862 Removing: /var/run/dpdk/spdk_pid161309
00:09:56.862 Removing: /var/run/dpdk/spdk_pid161523
00:09:56.862 Removing: /var/run/dpdk/spdk_pid161783
00:09:56.862 Removing: /var/run/dpdk/spdk_pid161993
00:09:56.862 Removing: /var/run/dpdk/spdk_pid162449
00:09:56.862 Removing: /var/run/dpdk/spdk_pid162474
00:09:56.862 Removing: /var/run/dpdk/spdk_pid162725
00:09:56.862 Removing: /var/run/dpdk/spdk_pid162925
00:09:56.862 Removing: /var/run/dpdk/spdk_pid163165
00:09:56.862 Removing: /var/run/dpdk/spdk_pid163251
00:09:56.862 Removing: /var/run/dpdk/spdk_pid163715
00:09:56.862 Removing: /var/run/dpdk/spdk_pid163950
00:09:56.862 Removing: /var/run/dpdk/spdk_pid164186
00:09:56.862 Removing: /var/run/dpdk/spdk_pid164456
00:09:56.862 Removing: /var/run/dpdk/spdk_pid164894
00:09:56.862 Removing: /var/run/dpdk/spdk_pid165310
00:09:56.862 Removing: /var/run/dpdk/spdk_pid165747
00:09:56.862 Removing: /var/run/dpdk/spdk_pid166176
00:09:56.862 Removing: /var/run/dpdk/spdk_pid166614
00:09:56.862 Removing: /var/run/dpdk/spdk_pid167041
00:09:56.862 Removing: /var/run/dpdk/spdk_pid167477
00:09:56.862 Removing: /var/run/dpdk/spdk_pid167908
00:09:56.862 Removing: /var/run/dpdk/spdk_pid168292
00:09:56.862 Removing: /var/run/dpdk/spdk_pid168660
00:09:56.862 Removing: /var/run/dpdk/spdk_pid169145
00:09:56.862 Removing: /var/run/dpdk/spdk_pid169570
00:09:56.862 Removing: /var/run/dpdk/spdk_pid170005
00:09:56.862 Removing: /var/run/dpdk/spdk_pid170643
00:09:56.862 Removing: /var/run/dpdk/spdk_pid171375
00:09:56.862 Removing: /var/run/dpdk/spdk_pid171804
00:09:56.862 Removing: /var/run/dpdk/spdk_pid172241
00:09:56.862 Removing: /var/run/dpdk/spdk_pid172671
00:09:56.862 Removing: /var/run/dpdk/spdk_pid173092
00:09:56.862 Removing: /var/run/dpdk/spdk_pid173488
00:09:56.862 Removing: /var/run/dpdk/spdk_pid173834
00:09:56.862 Removing: /var/run/dpdk/spdk_pid174211
00:09:56.862 Removing: /var/run/dpdk/spdk_pid174643
00:09:56.862 Removing: /var/run/dpdk/spdk_pid175078
00:09:56.862 Removing: /var/run/dpdk/spdk_pid175509
00:09:56.862 Removing: /var/run/dpdk/spdk_pid176012
00:09:56.862 Removing: /var/run/dpdk/spdk_pid176451
00:09:56.862 Removing: /var/run/dpdk/spdk_pid176885
00:09:56.862 Removing: /var/run/dpdk/spdk_pid177324
00:09:56.862 Removing: /var/run/dpdk/spdk_pid177754
00:09:56.862 Removing: /var/run/dpdk/spdk_pid178194
00:09:56.862 Removing: /var/run/dpdk/spdk_pid178630
00:09:56.862 Clean
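The Cleaning/Clean block drops per-process state left behind by the SPDK targets that ran during the job: the shared-memory trace file in /dev/shm and one /var/run/dpdk/spdk_pid<N> entry per DPDK process, where <N> is the owning PID. A rough equivalent of that sweep, assuming only the naming convention visible above (a sketch, not the autotest implementation):

  # remove runtime files whose owning process has already exited (illustrative)
  for f in /dev/shm/spdk_tgt_trace.pid* /var/run/dpdk/spdk_pid*; do
      [ -e "$f" ] || continue
      pid=${f##*pid}                      # the trailing digits are the owner PID
      kill -0 "$pid" 2>/dev/null || rm -rf "$f"
  done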
00:09:56.862 15:58:14 -- common/autotest_common.sh@1451 -- # return 0
00:09:56.862 15:58:14 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:09:56.862 15:58:14 -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:56.862 15:58:14 -- common/autotest_common.sh@10 -- # set +x
00:09:56.862 15:58:14 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:09:56.862 15:58:14 -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:56.862 15:58:14 -- common/autotest_common.sh@10 -- # set +x
00:09:56.862 15:58:14 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:09:56.862 15:58:14 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:09:56.862 15:58:14 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:09:56.862 15:58:14 -- spdk/autotest.sh@395 -- # hash lcov
00:09:56.862 15:58:14 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:09:56.862 15:58:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:09:56.862 15:58:14 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:09:56.862 15:58:14 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:56.862 15:58:14 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:56.862 15:58:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:56.863 15:58:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:56.863 15:58:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:56.863 15:58:14 -- paths/export.sh@5 -- $ export PATH
00:09:56.863 15:58:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:56.863 15:58:14 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:09:56.863 15:58:14 -- common/autobuild_common.sh@447 -- $ date +%s
00:09:56.863 15:58:14 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721915894.XXXXXX
00:09:56.863 15:58:14 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721915894.GGy5FK
00:09:56.863 15:58:14 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:09:56.863 15:58:14 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:09:56.863 15:58:14 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:09:56.863 15:58:14 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:09:56.863 15:58:14 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:09:56.863 15:58:14 -- common/autobuild_common.sh@463 -- $ get_config_params
00:09:56.863 15:58:14 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:09:56.863 15:58:14 -- common/autotest_common.sh@10 -- $ set +x
00:09:56.863 15:58:14 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
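get_config_params reports the ./configure options this job was built with: debug and werror enabled, RDMA, IDXD, the fio plugin, iSCSI initiator, UBSan, coverage, ublk and vfio-user, with unit tests disabled. To reproduce the build locally, the same flags would be passed straight to SPDK's configure script, roughly:

  # rebuild SPDK with the configuration recorded in this job (sketch)
  cd spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
              --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
              --enable-ubsan --enable-coverage --with-ublk --with-vfio-user
  make -j"$(nproc)"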
00:09:56.863 15:58:14 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:09:56.863 15:58:14 -- pm/common@17 -- $ local monitor
00:09:56.863 15:58:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:56.863 15:58:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:56.863 15:58:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:56.863 15:58:14 -- pm/common@21 -- $ date +%s
00:09:56.863 15:58:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:56.863 15:58:14 -- pm/common@21 -- $ date +%s
00:09:56.863 15:58:14 -- pm/common@25 -- $ sleep 1
00:09:56.863 15:58:14 -- pm/common@21 -- $ date +%s
00:09:56.863 15:58:14 -- pm/common@21 -- $ date +%s
00:09:56.863 15:58:14 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721915894
00:09:56.863 15:58:14 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721915894
00:09:56.863 15:58:14 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721915894
00:09:56.863 15:58:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721915894
00:09:56.863 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721915894_collect-vmstat.pm.log
00:09:56.863 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721915894_collect-cpu-load.pm.log
00:09:56.863 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721915894_collect-cpu-temp.pm.log
00:09:57.122 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721915894_collect-bmc-pm.bmc.pm.log
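start_monitor_resources launches one detached collector per resource (CPU load, vmstat, CPU temperature, BMC power), redirects each collector's output to a *.pm.log under the power/ directory, and records its PID in a matching *.pid file; the EXIT trap installed next tears the collectors down by signalling those PIDs. Reduced to a sketch, with hypothetical names ($SPDK_DIR, start_monitor, stop_monitor) standing in for pm/common:

  # start a collector and remember its PID so it can be stopped later (illustrative)
  start_monitor() {
      local name=$1 outdir=$2
      "$SPDK_DIR/scripts/perf/pm/collect-$name" -d "$outdir" -l -p "monitor.autopackage.sh.$(date +%s)" &
      echo $! > "$outdir/collect-$name.pid"
  }

  # stop it again from the PID file, as the EXIT trap does below
  stop_monitor() {
      local name=$1 outdir=$2
      [ -e "$outdir/collect-$name.pid" ] && kill -TERM "$(cat "$outdir/collect-$name.pid")"
  }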
00:09:58.059 15:58:15 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:09:58.059 15:58:15 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j88
00:09:58.059 15:58:15 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:58.059 15:58:15 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:09:58.059 15:58:15 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:09:58.059 15:58:15 -- spdk/autopackage.sh@19 -- $ timing_finish
00:09:58.059 15:58:15 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:09:58.059 15:58:15 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:09:58.059 15:58:15 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:09:58.059 15:58:15 -- spdk/autopackage.sh@20 -- $ exit 0
00:09:58.059 15:58:15 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:09:58.059 15:58:15 -- pm/common@29 -- $ signal_monitor_resources TERM
00:09:58.059 15:58:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:09:58.059 15:58:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:58.059 15:58:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:09:58.059 15:58:15 -- pm/common@44 -- $ pid=185584
00:09:58.059 15:58:15 -- pm/common@50 -- $ kill -TERM 185584
00:09:58.059 15:58:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:58.059 15:58:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:09:58.059 15:58:15 -- pm/common@44 -- $ pid=185586
00:09:58.059 15:58:15 -- pm/common@50 -- $ kill -TERM 185586
00:09:58.059 15:58:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:58.059 15:58:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:09:58.059 15:58:15 -- pm/common@44 -- $ pid=185589
00:09:58.059 15:58:15 -- pm/common@50 -- $ kill -TERM 185589
00:09:58.059 15:58:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:58.059 15:58:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:09:58.059 15:58:15 -- pm/common@44 -- $ pid=185625
00:09:58.059 15:58:15 -- pm/common@50 -- $ sudo -E kill -TERM 185625
00:09:58.059 + [[ -n 30165 ]]
00:09:58.059 + sudo kill 30165
00:09:58.069 [Pipeline] }
00:09:58.086 [Pipeline] // stage
00:09:58.092 [Pipeline] }
00:09:58.107 [Pipeline] // timeout
00:09:58.111 [Pipeline] }
00:09:58.127 [Pipeline] // catchError
00:09:58.132 [Pipeline] }
00:09:58.146 [Pipeline] // wrap
00:09:58.152 [Pipeline] }
00:09:58.164 [Pipeline] // catchError
00:09:58.172 [Pipeline] stage
00:09:58.174 [Pipeline] { (Epilogue)
00:09:58.186 [Pipeline] catchError
00:09:58.187 [Pipeline] {
00:09:58.198 [Pipeline] echo
00:09:58.199 Cleanup processes
00:09:58.203 [Pipeline] sh
00:09:58.484 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:58.484 185772 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:09:58.484 186557 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:58.498 [Pipeline] sh
00:09:58.784 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:58.784 ++ grep -v 'sudo pgrep'
00:09:58.784 ++ awk '{print $1}'
00:09:58.784 + sudo kill -9 185772
00:09:58.796 [Pipeline] sh
00:09:59.093 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:10:00.481 [Pipeline] sh
00:10:00.768 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:10:00.768 Artifacts sizes are good
00:10:00.782 [Pipeline] archiveArtifacts
00:10:00.789 Archiving artifacts
00:10:01.214 [Pipeline] sh
00:10:01.498 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest
00:10:01.512 [Pipeline] cleanWs
00:10:01.522 [WS-CLEANUP] Deleting project workspace...
00:10:01.522 [WS-CLEANUP] Deferred wipeout is used...
00:10:01.529 [WS-CLEANUP] done
00:10:01.531 [Pipeline] }
00:10:01.554 [Pipeline] // catchError
00:10:01.567 [Pipeline] sh
00:10:01.850 + logger -p user.info -t JENKINS-CI
00:10:01.859 [Pipeline] }
00:10:01.875 [Pipeline] // stage
00:10:01.881 [Pipeline] }
00:10:01.900 [Pipeline] // node
00:10:01.909 [Pipeline] End of Pipeline
00:10:01.943 Finished: SUCCESS
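The Epilogue's process sweep above lists anything still running out of the workspace, filters out the pgrep invocation itself, and SIGKILLs the rest (here the ipmitool SDR dump left behind by the BMC collector). Collapsed into one pipeline, that step is essentially the following sketch:

  # kill stray processes still running from the workspace (sketch of the cleanup step)
  sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk \
      | grep -v 'sudo pgrep' \
      | awk '{print $1}' \
      | xargs -r sudo kill -9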